Sample records for discrete probability distribution

  1. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained; their one-dimensional marginal distributions are widely used in probability theory. Generating functions of those multidimensional distributions were also derived.

  2. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

    A statistical approach to analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation with application of linear programming methods to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.

  3. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

    Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, α + active days, β + inactive days) assuming that p is randomly distributed as Beta(α, β) where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
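
    A minimal sketch of the beta-binomial calculation described above, assuming scipy's betabinom distribution; the toy counts and starting values are hypothetical stand-ins for the CHMS accelerometry data, not the published analysis.

    ```python
    # Illustrative sketch only: toy data standing in for CHMS accelerometry.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import betabinom

    active = np.array([7, 5, 6, 4, 7, 3, 6, 7])      # days meeting the guideline
    assessed = np.array([7, 7, 7, 6, 7, 7, 7, 7])    # days assessed

    def neg_log_lik(params):
        a, b = np.exp(params)                        # keep alpha, beta positive
        return -betabinom.logpmf(active, assessed, a, b).sum()

    res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]))
    a_hat, b_hat = np.exp(res.x)

    # probability of being active on 7 of 7 days under the fitted model
    print(a_hat, b_hat, betabinom.pmf(7, 7, a_hat, b_hat))
    ```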

  4. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    PubMed

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic network events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To do so, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspots datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It was shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations could be performed on a normal computer. This method was implemented in the freely available open source software R that is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
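
    A minimal sketch of the one-dimensional discrete Laplace pmf, P(X = x) = (1 − p)/(1 + p) · p^|x| for integer x, with a moment-based estimate of p. The sample construction (difference of two geometric variables) and all values are illustrative assumptions, not the paper's EM-based mixture fit.

    ```python
    import numpy as np

    def dlaplace_pmf(x, p):
        # P(X = x) = (1 - p)/(1 + p) * p**|x|, x in Z, 0 < p < 1
        return (1.0 - p) / (1.0 + p) * p ** np.abs(x)

    def estimate_p(sample):
        # method of moments: E|X| = 2p/(1 - p^2), solved for p
        m = np.mean(np.abs(sample))
        return 0.0 if m == 0 else (np.sqrt(1.0 + m * m) - 1.0) / m

    rng = np.random.default_rng(0)
    p_true = 0.6
    # the difference of two iid geometric(1 - p) variables on {0, 1, ...}
    # is exactly discrete Laplace with parameter p
    g1 = rng.geometric(1 - p_true, size=2000) - 1
    g2 = rng.geometric(1 - p_true, size=2000) - 1
    print(estimate_p(g1 - g2))            # close to 0.6
    ```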

  6. Correlation between discrete probability and reaction front propagation rate in heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Naine, Tarun Bharath; Gundawar, Manoj Kumar

    2017-09-01

    We demonstrate a very powerful correlation between the discrete probability of the distances of neighboring cells and the thermal wave propagation rate for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. Because no analytical solution exists and the differences in ignition times of adjacent reaction cells follow non-Markovian statistics, the thermal wave propagation rate for a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulation. However, such simulations, which are based on Monte Carlo methods, require several iterations of calculations for different realizations of the distribution of adjacent cells. For several one-dimensional systems, differing in the value of the shaping parameter of the gamma distribution, we show that the average reaction front propagation rates obtained from the discrete probability between two limits agree excellently with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of the burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations: the thermal wave propagation rate can now be calculated based only on the macroscopic quantity of discrete probability.

  7. Technology Development Risk Assessment for Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Godsell, Aga M.; Go, Susie

    2006-01-01

    A new approach for assessing development risk associated with technology development projects is presented. The method represents technology evolution in terms of sector-specific discrete development stages. A Monte Carlo simulation is used to generate development probability distributions based on statistical models of the discrete transitions. Development risk is derived from the resulting probability distributions and specific program requirements. Two sample cases are discussed to illustrate the approach, a single rocket engine development and a three-technology space transportation portfolio.

  8. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    PubMed

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
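
    A sketch of the standard equal-probability discrete gamma construction the abstract refers to, assuming a mean-one Gamma(α, 1/α) rate distribution; the partial-expectation identity E[X; a < X < b] = αθ[F(b; α+1) − F(a; α+1)] gives each category's conditional mean rate. The values of α and K are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import gamma

    def discrete_gamma_rates(alpha, K):
        """Equal-probability discrete gamma: K category rates with mean 1."""
        theta = 1.0 / alpha                                  # scale, so E[X] = 1
        edges = gamma.ppf(np.linspace(0, 1, K + 1), alpha, scale=theta)
        # conditional mean of each category via the partial-expectation identity
        upper = gamma.cdf(edges[1:], alpha + 1, scale=theta)
        lower = gamma.cdf(edges[:-1], alpha + 1, scale=theta)
        return K * (upper - lower)

    rates = discrete_gamma_rates(alpha=0.5, K=8)
    print(rates, rates.mean())          # the rates average to 1 by construction
    ```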

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu

    In biochemical systems, intrinsic noise may drive the system switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients draw from a discrete probability distribution. For the continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.

  10. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
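
    A numeric sketch under the standard result the abstract tabulates: for a fixed pth absolute moment, the maximum-entropy pmf on the integers has the exponential-family form P(X = x) ∝ exp(−λ|x|^p), so finding it reduces to solving for λ on a (truncated) support. The support width and target moment below are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def maxent_pmf(p, target_moment, support=np.arange(-200, 201)):
        a = np.abs(support) ** p
        def moment_gap(lam):
            w = np.exp(-lam * a)
            return (w * a).sum() / w.sum() - target_moment
        lam = brentq(moment_gap, 1e-6, 50.0)        # match E|X|^p to the target
        w = np.exp(-lam * a)
        return support, w / w.sum()

    xs, pmf = maxent_pmf(p=2, target_moment=4.0)    # p = 2: discrete-Gaussian shape
    mask = pmf > 0
    print(-(pmf[mask] * np.log(pmf[mask])).sum())   # the maximum absolute entropy
    ```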

  11. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  12. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
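
    A sketch of the slicing step, using scipy's skew-normal cdf as a hypothetical stand-in for the generalized skew-symmetric profile; the resulting constant-density slices are what a Parratt recursion or an onion-shell scattering model would consume.

    ```python
    import numpy as np
    from scipy.stats import skewnorm

    rho_top, rho_bottom = 0.0, 1.0       # densities of the two bounding media
    a = 4.0                              # skewness controlling interface asymmetry
    z = np.linspace(-5, 5, 201)          # depth grid defining thin slices

    # continuous asymmetric interfacial profile (illustrative kernel choice)
    profile = rho_top + (rho_bottom - rho_top) * skewnorm.cdf(z, a)

    # discretize: one constant density per thin slice, sharp interfaces
    slice_density = 0.5 * (profile[:-1] + profile[1:])
    print(z[1] - z[0], slice_density[:5])
    ```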

  13. Role of conviction in nonequilibrium models of opinion formation

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno; Anteneodo, Celia

    2012-12-01

    We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).

  14. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.

  15. A PLUG-AND-PLAY ARCHITECTURE FOR PROBABILISTIC PROGRAMMING

    DTIC Science & Technology

    2017-04-01

    programs that use discrete numerical distributions, but even then, the space of possible outcomes may be uncountable (as a solution can be infinite...also identify conditions guaranteeing that all possible outcomes are finite (and then the probability space is discrete). 2.2.2 The PlogiQL...and not determined at runtime. Nevertheless, the PRAiSE team plans to extend their solution to support numerical (continuous or discrete

  16. Estimation of distribution overlap of urn models.

    PubMed

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
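
    For fully known urns the dissimilarity probability has the closed form Σ_x p_x(1 − q_x)^n; the sketch below cross-checks that identity by simulation on hypothetical three-category urns (the paper's estimator works from samples, not known urns).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p = np.array([0.5, 0.3, 0.2])            # urn P
    q = np.array([0.6, 0.1, 0.3])            # urn Q
    n = 5

    exact = np.sum(p * (1.0 - q) ** n)       # draw from P unseen in n draws from Q

    trials = 50_000
    draws_p = rng.choice(3, size=trials, p=p)
    draws_q = rng.choice(3, size=(trials, n), p=q)
    mc = np.mean([d not in row for d, row in zip(draws_p, draws_q)])
    print(exact, mc)
    ```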

  17. Exact probability distribution functions for Parrondo's games

    NASA Astrophysics Data System (ADS)

    Zadourian, Rubina; Saakian, David B.; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.

  18. Exact probability distribution functions for Parrondo's games.

    PubMed

    Zadourian, Rubina; Saakian, David B; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.

  19. Probability Distributions of Minkowski Distances between Discrete Random Variables.

    ERIC Educational Resources Information Center

    Schroger, Erich; And Others

    1993-01-01

    Minkowski distances are used to indicate the similarity of two vectors in an N-dimensional space. Shows how to compute the probability function, the expectation, and the variance for Minkowski distances, including the special cases of city-block distance and Euclidean distance. Critical values for tests of significance are presented in tables. (SLD)

  20. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…

  1. Attention as Inference: Selection Is Probabilistic; Responses Are All-or-None Samples

    ERIC Educational Resources Information Center

    Vul, Edward; Hanus, Deborah; Kanwisher, Nancy

    2009-01-01

    Theories of probabilistic cognition postulate that internal representations are made up of multiple simultaneously held hypotheses, each with its own probability of being correct (henceforth, "probability distributions"). However, subjects make discrete responses and report the phenomenal contents of their mind to be all-or-none states rather than…

  2. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
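
    A minimal sketch of fitting a parametric distribution to interval-censored retention times by maximum likelihood, with each observation contributing log[F(upper) − F(lower)]; the lognormal family and the toy intervals are illustrative assumptions, not the study's data.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lognorm

    # (lower, upper) sampling-interval bounds for each recovered propagule (toy)
    lo = np.array([0.0, 2.0, 2.0, 4.0, 4.0, 8.0])
    hi = np.array([2.0, 4.0, 4.0, 8.0, 8.0, 12.0])

    def neg_log_lik(params):
        s, scale = np.exp(params)            # keep both parameters positive
        prob = lognorm.cdf(hi, s, scale=scale) - lognorm.cdf(lo, s, scale=scale)
        return -np.sum(np.log(np.clip(prob, 1e-300, None)))

    res = minimize(neg_log_lik, x0=np.log([1.0, 4.0]))
    print(np.exp(res.x))                     # fitted sigma and exp(mu)
    ```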

  3. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
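
    A sketch of the pgf-inversion idea named in the abstract: evaluate the generating function at the Nth roots of unity and invert with a discrete Fourier transform to read off the pmf. The Poisson pgf exp(λ(z − 1)) is a stand-in for the amplification-model pgf.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def pmf_from_pgf(pgf, N):
        z = np.exp(2j * np.pi * np.arange(N) / N)   # N-th roots of unity
        # with numpy's sign convention, fft(G(z_j))/N recovers p_k for k < N
        return (np.fft.fft(pgf(z)) / N).real

    lam = 3.0
    pmf = pmf_from_pgf(lambda z: np.exp(lam * (z - 1.0)), N=64)
    print(pmf[:5])
    print(poisson.pmf(np.arange(5), lam))           # agrees to rounding error
    ```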

  4. Adaptive Decision Making Using Probabilistic Programming and Stochastic Optimization

    DTIC Science & Technology

    2018-01-01

    world optimization problems (and hence ...Pred. demand (uncertain; discrete ...simplify the setting, we further assume that the demands are discrete, taking on values d1, ..., dk with probabilities (conditional on x) (pθ)i ≡ p...Tyrrell Rockafellar. Implicit functions and solution mappings. Springer Monogr. Math., 2009. Anthony V Fiacco and Yo Ishizuka. Sensitivity and stability

  5. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization and linearized power flow, an optimal power flow problem minimizing the cost of conventional power generation is solved, so that reliability assessment of the distribution grid is carried out quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the method calculates the reliability indices much faster than the Monte Carlo method while preserving accuracy.

  6. Robustness of quantum key distribution with discrete and continuous variables to channel noise

    NASA Astrophysics Data System (ADS)

    Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.

    2017-06-01

    We study the robustness of quantum key distribution protocols using discrete or continuous variables to the channel noise. We introduce the model of such noise based on coupling of the signal to a thermal reservoir, typical for continuous-variable quantum key distribution, to the discrete-variable case. Then we perform a comparison of the bounds on the tolerable channel noise between these two kinds of protocols using the same noise parametrization, in the case of implementation which is perfect otherwise. Obtained results show that continuous-variable protocols can exhibit similar robustness to the channel noise when the transmittance of the channel is relatively high. However, for strong loss discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The requirement on the probability of a single-photon production which would have to be fulfilled by a practical source of photons in order to demonstrate such superiority is feasible thanks to the recent rapid development in this field.

  7. Influence of the random walk finite step on the first-passage probability

    NASA Astrophysics Data System (ADS)

    Klimenkova, Olga; Menshutin, Anton; Shchur, Lev

    2018-01-01

    The well-known connection between the first-passage probability of a random walk and the distribution of the electrical potential described by the Laplace equation is studied. We simulate a random walk in the plane numerically as a discrete-time process with fixed step length. We measure the first-passage probability to touch an absorbing sphere of radius R in 2D. We find a regular deviation of the first-passage probability from the exact function, which we attribute to the finiteness of the random walk step.
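
    A sketch of this kind of experiment: a fixed-step walk absorbed on a circle, whose exit-angle statistics can be compared against the continuum harmonic-measure prediction (the Poisson kernel). The starting point, step length and bin count are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    z0 = 0.5 + 0.0j                  # start inside the unit circle
    step = 0.05                      # fixed step length of the discrete walk
    walks = 2000

    angles = np.empty(walks)
    for i in range(walks):
        z = z0
        while abs(z) < 1.0:          # absorb on first touch of the circle
            z += step * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
        angles[i] = np.angle(z)

    edges = np.linspace(-np.pi, np.pi, 9)
    hist, _ = np.histogram(angles, bins=edges, density=True)
    # continuum prediction: Poisson kernel (1-|z0|^2)/(2*pi*|e^{i t}-z0|^2)
    mid = 0.5 * (edges[:-1] + edges[1:])
    kernel = (1 - abs(z0) ** 2) / (2 * np.pi * np.abs(np.exp(1j * mid) - z0) ** 2)
    print(hist)
    print(kernel)       # finite step size shows up as a systematic deviation
    ```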

  8. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.

  9. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator fault. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller is formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied on continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  10. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed is considered. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  11. Energy Distributions in Small Populations: Pascal versus Boltzmann

    ERIC Educational Resources Information Center

    Kugel, Roger W.; Weiner, Paul A.

    2010-01-01

    The theoretical distributions of a limited amount of energy among small numbers of particles with discrete, evenly-spaced quantum levels are examined systematically. The average populations of energy states reveal the pattern of Pascal's triangle. An exact formula for the probability that a particle will be in any given energy state is derived.…
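
    A sketch of the counting behind the abstract: with q indistinguishable quanta over N particles and all microstates equally likely, the probability that a tagged particle holds k quanta is C(q − k + N − 2, N − 2)/C(q + N − 1, N − 1); the example sizes are arbitrary.

    ```python
    from math import comb

    def level_probability(k, q, N):
        # ways the other N-1 particles share q-k quanta, over all microstates
        return comb(q - k + N - 2, N - 2) / comb(q + N - 1, N - 1)

    q, N = 6, 4                                    # 6 quanta among 4 particles
    probs = [level_probability(k, q, N) for k in range(q + 1)]
    print(probs, sum(probs))                       # probabilities sum to 1
    ```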

  12. Cluster-based control of a separating flow over a smoothly contoured ramp

    NASA Astrophysics Data System (ADS)

    Kaiser, Eurika; Noack, Bernd R.; Spohn, Andreas; Cattafesta, Louis N.; Morzyński, Marek

    2017-12-01

    The ability to manipulate and control fluid flows is of great importance in many scientific and engineering applications. The proposed closed-loop control framework addresses a key issue of model-based control: The actuation effect often results from slow dynamics of strongly nonlinear interactions which the flow reveals at timescales much longer than the prediction horizon of any model. Hence, we employ a probabilistic approach based on a cluster-based discretization of the Liouville equation for the evolution of the probability distribution. The proposed methodology frames high-dimensional, nonlinear dynamics into low-dimensional, probabilistic, linear dynamics which considerably simplifies the optimal control problem while preserving nonlinear actuation mechanisms. The data-driven approach builds upon a state space discretization using a clustering algorithm which groups kinematically similar flow states into a low number of clusters. The temporal evolution of the probability distribution on this set of clusters is then described by a control-dependent Markov model. This Markov model can be used as predictor for the ergodic probability distribution for a particular control law. This probability distribution approximates the long-term behavior of the original system on which basis the optimal control law is determined. We examine how the approach can be used to improve the open-loop actuation in a separating flow dominated by Kelvin-Helmholtz shedding. For this purpose, the feature space, in which the model is learned, and the admissible control inputs are tailored to strongly oscillatory flows.

  13. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    PubMed

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.

  14. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  15. Modeling and simulation of count data.

    PubMed

    Plan, E L

    2014-08-13

    Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
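
    A minimal sketch of the overdispersion diagnostic mentioned in the tutorial: gamma-mixed Poisson counts have variance above the mean, and a negative binomial can be recovered by the method of moments. All values are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    rates = rng.gamma(shape=2.0, scale=1.5, size=5000)   # subject-level event rates
    counts = rng.poisson(rates)                          # overdispersed count data

    m, v = counts.mean(), counts.var()
    print(m, v)                       # variance > mean signals overdispersion
    # method-of-moments negative binomial: v = m + m^2/r
    r_hat = m * m / (v - m)
    print(r_hat, r_hat / (r_hat + m))                    # size and prob parameters
    ```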

  16. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868

  17. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
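
    A sketch of the discrete-time rescaling described above, assuming a Bernoulli spike train with known per-bin probabilities p_k: summing q_k = −log(1 − p_k) between spikes gives approximately Exp(1) intervals under the true model (the paper's analytical correction makes this exact at arbitrary discretizations).

    ```python
    import numpy as np
    from scipy.stats import kstest

    rng = np.random.default_rng(4)
    T = 200_000
    p = 0.02 * (1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / 1000.0))
    spikes = rng.random(T) < p                 # simulate from the true model

    q = -np.log1p(-p)                          # per-bin rescaled mass
    cum = np.cumsum(q)
    rescaled_isi = np.diff(cum[np.flatnonzero(spikes)])
    print(kstest(rescaled_isi, 'expon'))       # should not reject exponentiality
    ```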

  18. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

    In the past, the super prompt critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time dependent adjoint Kolmogorov master equations for the number of detections using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and Monte-Carlo calculations based on the algorithm presented in [7]. (authors)

  19. Electromigration Mechanism of Failure in Flip-Chip Solder Joints Based on Discrete Void Formation.

    PubMed

    Chang, Yuan-Wei; Cheng, Yin; Helfen, Lukas; Xu, Feng; Tian, Tian; Scheel, Mario; Di Michiel, Marco; Chen, Chih; Tu, King-Ning; Baumbach, Tilo

    2017-12-20

    In this investigation, SnAgCu and SN100C solders were electromigration (EM) tested, and the 3D laminography imaging technique was employed for in-situ observation of the microstructure evolution during testing. We found that discrete voids nucleate, grow and coalesce along the intermetallic compound/solder interface during EM testing. A systematic analysis yields quantitative information on the number, volume, and growth rate of voids, and the EM parameter DZ*. We observe that fast intrinsic diffusion in SnAgCu solder causes void growth and coalescence, while in the SN100C solder this coalescence was not significant. To deduce the current density distribution, finite-element models were constructed on the basis of the laminography images. The discrete voids do not change the global current density distribution, but they induce local current crowding around the voids: this local current crowding enhances the lateral void growth and coalescence. The correlation between the current density and the probability of void formation indicates that a threshold current density exists for the activation of void formation. There is a significant increase in the probability of void formation when the current density exceeds half of the maximum value.

  20. Parallel discrete event simulation using shared memory

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm, to simulate networks of queues is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  1. Stylized facts in internal rates of return on stock index and its derivative transactions

    NASA Astrophysics Data System (ADS)

    Pichl, Lukáš; Kaizoji, Taisei; Yamano, Takuya

    2007-08-01

    Universal features in stock markets and their derivative markets are studied by means of probability distributions in internal rates of return on buy and sell transaction pairs. Unlike the stylized facts in normalized log returns, the probability distributions for such single asset encounters incorporate the time factor by means of the internal rate of return, defined as the continuous compound interest. Resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S & P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of NIKKEI 225 and its futures market, respectively, reveals an interesting difference in the behavior of the two probability distributions, in case a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of the internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum, which results from noise trader strategies as opposed to the smooth distributions observed for fundamentalist strategies in single encounter transactions may be useful in deducing the type of investment strategy from trading revenues of small portfolio investors.

  2. An integrated logit model for contamination event detection in water distribution systems.

    PubMed

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts for event detection utilize a variety of approaches including statistical, heuristics, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the current developed approach is using a statistically oriented model for discrete choice prediction which is estimated using the maximum likelihood method for integrating the single alarms. The discrete choice model is jointly calibrated with other components of the event detection system framework in a training data set using genetic algorithms. The fusing process of each indicator probabilities, which is left out of focus in many existing event detection system models, is confirmed to be a crucial part of the system which could be modelled by exploiting a discrete choice model for improving its performance. The developed methodology is tested on real water quality data, showing improved performances in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. VARIANCE ESTIMATION FOR SPATIALLY BALANCED SAMPLES OF ENVIRONMENTAL RESOURCES

    EPA Science Inventory

    The spatial distribution of a natural resource is an important consideration in designing an efficient survey or monitoring program for the resource. We review a unified strategy for designing probability samples of discrete, finite resource populations, such as lakes within som...

  4. A Local-Realistic Model of Quantum Mechanics Based on a Discrete Spacetime

    NASA Astrophysics Data System (ADS)

    Sciarretta, Antonio

    2018-01-01

    This paper presents a realistic, stochastic, and local model that reproduces nonrelativistic quantum mechanics (QM) results without using its mathematical formulation. The proposed model only uses integer-valued quantities and operations on probabilities, in particular assuming a discrete spacetime under the form of a Euclidean lattice. Individual (spinless) particle trajectories are described as random walks. Transition probabilities are simple functions of a few quantities that are either randomly associated to the particles during their preparation, or stored in the lattice nodes they visit during the walk. QM predictions are retrieved as probability distributions of similarly-prepared ensembles of particles. The scenarios considered to assess the model comprise of free particle, constant external force, harmonic oscillator, particle in a box, the Delta potential, particle on a ring, particle on a sphere and include quantization of energy levels and angular momentum, as well as momentum entanglement.

  5. Estimation in a discrete tail rate family of recapture sampling models

    NASA Technical Reports Server (NTRS)

    Gupta, Rajan; Lee, Larry D.

    1990-01-01

    In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.

  6. THE DISTRIBUTION OF ROUNDS FIRED IN STOCHASTIC DUELS

    DTIC Science & Technology

    This paper continues the development of the theory of Stochastic Duels to include the distribution of the number of rounds fired. Most generally...the duel between two contestants who fire at each other with constant kill probabilities per round is considered. The time between rounds fired may be...at the beginning of the duel may be limited and is a discrete random variable. Besides the distribution of rounds fired, its first two moments and

  7. On computational Gestalt detection thresholds.

    PubMed

    Grompone von Gioi, Rafael; Jakubowicz, Jérémie

    2009-01-01

    The aim of this paper is to show some recent developments of computational Gestalt theory, as pioneered by Desolneux, Moisan and Morel. The new results make it possible to predict the detection thresholds much more accurately. This step is unavoidable if one wants to analyze visual detection thresholds in the light of computational Gestalt theory. The paper first recalls the main elements of computational Gestalt theory. It points out a precision issue in this theory, essentially due to the use of discrete probability distributions. It then proposes to overcome this issue by using continuous probability distributions and illustrates it on the meaningful alignment detector of Desolneux et al.
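
    A sketch of the a-contrario threshold at the core of computational Gestalt detection: an event with k co-occurrences out of n, each of probability p, is declared meaningful when NFA = N_tests · P(B(n, p) ≥ k) < 1. The numbers below are hypothetical.

    ```python
    from scipy.stats import binom

    def nfa(n_tests, n, k, p):
        # number of false alarms: tests times the binomial tail P(B(n, p) >= k)
        return n_tests * binom.sf(k - 1, n, p)

    # e.g. alignments in a 512x512 image with angular precision p = 1/16
    print(nfa(n_tests=512 ** 4, n=120, k=40, p=1.0 / 16.0))   # << 1: meaningful
    ```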

  8. Diagnosability of Stochastic Chemical Kinetic Systems: A Discrete Event Systems Approach (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    USA. E-mail: thorsley@u.washington.edu. This research is partially supported by the 2006 AFOSR MURI award "High Confidence Design for Distributed...occurrence of the finite sample path ω. These distributions are defined recursively to be π0(x) := π0(x), πωσ(x′) := Σx∈X πω(x) r(x′, σ | x) e−r(x′, σ | x)... e−rxτ. (2) This is the probability that the arrival time of the first event is greater than τ. For finite sample paths with strings

  9. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral, we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We use Monte Carlo simulations to show that the Tikhonov method is a consistent and asymptotically unbiased estimator. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly, without the need for any convergence criteria.
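
    A hedged sketch of the core step, Tikhonov regularization of a discretized Fredholm integral (the kernel, noise level, and regularization parameter below are placeholder assumptions; the paper's parameter-selection procedure is not reproduced):

        import numpy as np

        rng = np.random.default_rng(seed=3)
        n = 100
        v = np.linspace(0.0, 1.0, n)                 # velocity grid (arbitrary units)

        # Placeholder smoothing kernel standing in for the discretized Fredholm operator.
        K = np.exp(-((v[:, None] - v[None, :]) ** 2) / 0.02)
        K /= K.sum(axis=1, keepdims=True)

        f_true = np.exp(-((v - 0.5) ** 2) / 0.01)    # "true" distribution to recover
        y = K @ f_true + 0.01 * rng.standard_normal(n)

        lam = 1e-3                                   # Tikhonov parameter (assumed)
        # Regularized least squares: minimize ||K f - y||^2 + lam * ||f||^2.
        f_hat = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)
        print(f"relative error: {np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true):.3f}")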

  10. Cluster analysis of European Y-chromosomal STR haplotypes using the discrete Laplace method.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2014-07-01

    The European Y-chromosomal short tandem repeat (STR) haplotype distribution has previously been analysed in various ways. Here, we introduce a new way of analysing population substructure, using a method based on clustering within the discrete Laplace exponential family that models the probability distribution of the Y-STR haplotypes. Creating a consistent statistical model of the haplotypes enables us to perform a wide range of analyses. Haplotype frequency estimation using the discrete Laplace method has previously been validated. In this paper we investigate how the discrete Laplace method can be used for cluster analysis, which serves as a further validation of the method. An important practical fact is that the calculations can be performed on a normal computer. We identified two sub-clusters of the Eastern and Western European Y-STR haplotypes, similar to the results of previous studies. We also compared pairwise distances between geographically separated samples with those obtained using the AMOVA method and found good agreement. Further analyses that are impossible with AMOVA were made using the discrete Laplace method: analysis of homogeneity in two different ways and calculation of marginal STR distributions. We found that the Y-STR haplotypes from e.g. Finland were relatively homogeneous, as opposed to the relatively heterogeneous Y-STR haplotypes from e.g. Lublin, Eastern Poland and Berlin, Germany. We demonstrated that the observed distributions of alleles at each locus were similar to the expected ones. We also compared pairwise distances between geographically separated samples from Africa with those obtained using the AMOVA method and found good agreement. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
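
    For reference, a standard parameterization of the discrete Laplace distribution assigns P(X = k) = ((1 − p)/(1 + p)) p^|k| to every integer k. A small sketch checking that this pmf is properly normalized (the parameter value is assumed; this is not the paper's fitting code):

        import numpy as np

        def discrete_laplace_pmf(k, p):
            """P(X = k) = (1 - p)/(1 + p) * p**|k| for integer k, 0 < p < 1."""
            return (1.0 - p) / (1.0 + p) * p ** np.abs(k)

        p = 0.4
        k = np.arange(-50, 51)
        pmf = discrete_laplace_pmf(k, p)
        print(f"total mass over |k| <= 50: {pmf.sum():.6f}")   # approaches 1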

  11. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the values of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and, a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  12. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
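
    A common way to perform such a transformation is inverse-transform sampling: push uniform deviates through the inverse cumulative distribution function of the target law. A sketch for the exponential case, one of the distributions listed above (the rate parameter is an assumption; the report's exact procedure is not reproduced):

        import numpy as np

        rng = np.random.default_rng(seed=5)
        u = rng.random(100_000)            # uniform(0, 1) deviates

        # Inverse CDF of the exponential distribution: F^-1(u) = -ln(1 - u) / lam.
        lam = 2.0                          # assumed rate parameter
        loads = -np.log(1.0 - u) / lam

        print(f"sample mean {loads.mean():.4f} vs theoretical {1 / lam:.4f}")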

  13. Inference for the Bivariate and Multivariate Hidden Truncated Pareto(type II) and Pareto(type IV) Distribution and Some Measures of Divergence Related to Incompatibility of Probability Distribution

    ERIC Educational Resources Information Center

    Ghosh, Indranil

    2011-01-01

    Consider a discrete bivariate random variable (X, Y) with possible values x_1, x_2, …, x_I for X and y_1, y_2, …, y_J for Y. Further suppose that the corresponding families of conditional distributions, for X given values of Y and of Y given values of X, are available. We…

  14. Distributed fault detection over sensor networks with Markovian switching topologies

    NASA Astrophysics Data System (ADS)

    Ge, Xiaohua; Han, Qing-Long

    2014-05-01

    This paper deals with distributed fault detection for discrete-time Markov jump linear systems over sensor networks with Markovian switching topologies. The sensors are deployed throughout the sensor field, and the fault detectors are physically distributed via a communication network. The system dynamics changes and sensing topology variations are modeled by a discrete-time Markov chain with incomplete mode transition probabilities. Each sensor node first collects measurement outputs from all of its underlying neighboring nodes, processes these data in accordance with the Markovian switching topologies, and then transmits the processed data to the remote fault detector node. Network-induced delays and accumulated data packet dropouts are incorporated in the data transmission between the sensor nodes and the distributed fault detector nodes through the communication network. To generate localized residual signals, mode-independent distributed fault detection filters are proposed. By means of the stochastic Lyapunov functional approach, the residual system performance analysis is carried out such that the overall residual system is stochastically stable and the error between each residual signal and the fault signal is made as small as possible. Furthermore, a sufficient condition on the existence of the mode-independent distributed fault detection filters is derived in the simultaneous presence of incomplete mode transition probabilities, Markovian switching topologies, network-induced delays, and accumulated data packet dropouts. Finally, a stirred-tank reactor system is given to show the effectiveness of the developed theoretical results.

  15. The extraction and integration framework: a two-process account of statistical learning.

    PubMed

    Thiessen, Erik D; Kronstein, Alexandra T; Hufnagle, Daniel G

    2013-07-01

    The term statistical learning in infancy research originally referred to sensitivity to transitional probabilities. Subsequent research has demonstrated that statistical learning contributes to infant development in a wide array of domains. The range of statistical learning phenomena necessitates a broader view of the processes underlying statistical learning. Learners are sensitive to a much wider range of statistical information than the conditional relations indexed by transitional probabilities, including distributional and cue-based statistics. We propose a novel framework that unifies learning about all of these kinds of statistical structure. From our perspective, learning about conditional relations outputs discrete representations (such as words). Integration across these discrete representations yields sensitivity to cues and distributional information. To achieve sensitivity to all of these kinds of statistical structure, our framework combines processes that extract segments of the input with processes that compare across these extracted items. In this framework, the items extracted from the input serve as exemplars in long-term memory. The similarity structure of those exemplars in long-term memory leads to the discovery of cues and categorical structure, which guides subsequent extraction. The extraction and integration framework provides a way to explain sensitivity to both conditional statistical structure (such as transitional probabilities) and distributional statistical structure (such as item frequency and variability), and also a framework for thinking about how these different aspects of statistical learning influence each other. 2013 APA, all rights reserved

  16. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of mathematical models describing the interactions between the immune system and tumour cells, which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells, but only postulate its general properties. We analyse basic mathematical properties of the considered model such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the forms of the considered probability densities, namely Erlang, triangular, and uniform probability densities, either separated from zero or not. Particular instability results are obtained for a general type of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.

  17. Analysis of the Hessian for Inverse Scattering Problems. Part 3. Inverse Medium Scattering of Electromagnetic Waves in Three Dimensions

    DTIC Science & Technology

    2012-08-01

    …an implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately… probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this…

  18. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  19. Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem

    2015-06-01

    Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often deal with imprecise or ambiguous data. In this paper, we propose a novel robust data envelopment analysis (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete, uncertain input and output data. The method is based upon the discrete robust optimization approach proposed by Mulvey et al. (1995), which utilizes probable scenarios to capture the effect of ambiguous data in the case study. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies; some electricity distribution companies may not report clear and accurate statistics to the government. It is therefore necessary to utilize an approach that deals with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' preferences when input/output data are uncertain.

  20. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  1. In Defense of the Chi-Square Continuity Correction.

    ERIC Educational Resources Information Center

    Veldman, Donald J.; McNemar, Quinn

    Published studies of the sampling distribution of chi-square with and without Yates' correction for continuity have been interpreted as discrediting the correction. Yates' correction actually produces a biased chi-square value which in turn yields a better estimate of the exact probability of the discrete event concerned when used in conjunction…

  2. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user configurable to adjust a number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.

  3. Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data - A Distance-based Approach

    DTIC Science & Technology

    2016-09-01

    …is to fit empirical Beta distributions to observed data, and then to use a randomization approach to make inferences on the difference between… a Ridit analysis on the often sparse data sets in many Flying Qualities applications. The method of this paper is to fit empirical Beta… One such measure is the discrete-probability-distribution version of the (squared) Hellinger distance (Yang & Le Cam, 2000): H²(P, Q) = 1 −…
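
    For discrete probability distributions P and Q, the squared Hellinger distance can be written H²(P, Q) = 1 − Σᵢ √(pᵢ qᵢ), which equals the equivalent form ½ Σᵢ (√pᵢ − √qᵢ)². A short sketch (toy distributions assumed):

        import numpy as np

        def hellinger_sq(p, q):
            """Squared Hellinger distance between two discrete distributions:
            H^2(P, Q) = 1 - sum_i sqrt(p_i * q_i); 0 if identical, 1 if disjoint."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            return 1.0 - np.sum(np.sqrt(p * q))

        p = [0.10, 0.20, 0.30, 0.40]
        q = [0.25, 0.25, 0.25, 0.25]
        print(f"H^2(P, Q) = {hellinger_sq(p, q):.4f}")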

  4. A Process Improvement Study on a Military System of Clinics to Manage Patient Demand and Resource Utilization Using Discrete-Event Simulation, Sensitivity Analysis, and Cost-Benefit Analysis

    DTIC Science & Technology

    2015-03-12

    [Front-matter fragments: Table 3: Optometry Clinic Frequency Count; Table 22: Probability Distribution Summary Table.] …Clinic, the Audiology Clinic, and the Optometry Clinic. Methodology Overview: the overarching research goal is to identify feasible solutions to…

  5. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization; (2) we use image patches of natural images and discuss the role of the prior for the extraction of image components; (3) we use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy; and (4) we apply the algorithm to the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  6. Nanotube Tunneling as a Consequence of Probable Discrete Trajectories

    NASA Technical Reports Server (NTRS)

    Robinson, Daryl C.

    2001-01-01

    It has been recently reported that the electrical charge in a semiconductive carbon nanotube is not evenly distributed, but is divided into charge "islands." A clear understanding of tunneling phenomena can be useful to elucidate the mechanism for electrical conduction in nanotubes. This paper represents the first attempt to shed light on the aforementioned phenomenon through viewing tunneling as a natural consequence of "discrete trajectories." The relevance of this analysis is that it may provide further insight into the higher rate of tunneling processes, which makes tunneling devices attractive. In a situation involving particles impinging on a classically impenetrable barrier, the result of quantum mechanics that the probability of detecting transmitted particles falls off exponentially is derived without wave theory. This paper should provide a basis for calculating the charge profile over the length of the tube so that nanoscale devices' conductive properties may be fully exploited.

  7. Solutions of burnt-bridge models for molecular motor transport.

    PubMed

    Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B; Artyomov, Maxim N

    2007-03-01

    Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called "bridges"), is investigated theoretically by analyzing discrete-state stochastic "burnt-bridge" models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed ("burned") with a probability p , creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
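
    A hedged Monte Carlo sketch of the model (the bridge spacing, the rule that a burnt bridge simply blocks the move, and the parameter values are assumptions for illustration; the paper's exact solutions are not reproduced):

        import numpy as np

        rng = np.random.default_rng(seed=7)
        N, p, n_steps = 10, 1.0, 100_000     # bridge spacing, burning probability

        pos, burnt = 0, set()                # walker position and burnt links
        for _ in range(n_steps):
            step = rng.choice((-1, 1))       # unbiased diffusion attempt
            link = min(pos, pos + step)      # index of the link being crossed
            is_bridge = (link % N == 0)      # a bridge sits on every N-th link (assumed)
            if is_bridge and link in burnt:
                continue                     # burnt bridges cannot be recrossed
            pos += step
            if is_bridge and rng.random() < p:
                burnt.add(link)              # crossing burns the bridge

        print(f"mean velocity ~ {pos / n_steps:.4f} lattice sites per step")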

  8. Exact Solutions of Burnt-Bridge Models for Molecular Motor Transport

    NASA Astrophysics Data System (ADS)

    Morozov, Alexander; Pronina, Ekaterina; Kolomeisky, Anatoly; Artyomov, Maxim

    2007-03-01

    Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called ``bridges''), is investigated theoretically by analyzing discrete-state stochastic ``burnt-bridge'' models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed (``burned'') with a probability p, creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a new theoretical method is developed, and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics, periodic and random distributions of bridges, and different burning dynamics are analyzed and compared. Theoretical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.

  9. Solutions of burnt-bridge models for molecular motor transport

    NASA Astrophysics Data System (ADS)

    Morozov, Alexander Yu.; Pronina, Ekaterina; Kolomeisky, Anatoly B.; Artyomov, Maxim N.

    2007-03-01

    Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called “bridges”), is investigated theoretically by analyzing discrete-state stochastic “burnt-bridge” models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed (“burned”) with a probability p , creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.

  10. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  11. A Brownian Bridge Movement Model to Track Mobile Targets

    DTIC Science & Technology

    2016-09-01

    …breakout of Chinese forces in the South China Sea. Probability heat maps, depicting the probability of a target location at discrete times, are… To achieve a higher probability of detection, it is more effective to have sensors cover a wider area at fewer discrete points in time than to have a greater number of discrete looks using sensors covering smaller areas. Subject terms: Brownian bridge movement models, unmanned sensors.

  12. Exploration properties of biased evanescent random walkers on a one-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Esguerra, Jose Perico; Reyes, Jelian

    2017-08-01

    We investigate the combined effects of bias and evanescence on the characteristics of random walks on a one-dimensional lattice. We calculate the time-dependent return probability, eventual return probability, conditional mean return time, and the time-dependent mean number of visited sites of biased immortal and evanescent discrete-time random walkers on a one-dimensional lattice. We then extend the calculations to the case of a continuous-time step-coupled biased evanescent random walk on a one-dimensional lattice with an exponential waiting time distribution.
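
    A hedged Monte Carlo sketch of one quantity from the abstract, the eventual return probability of a biased walker that evanesces (dies) with fixed probability per step; the bias, survival probability, and time cutoff below are assumptions:

        import numpy as np

        rng = np.random.default_rng(seed=8)
        q, s = 0.55, 0.98                 # right-step probability, per-step survival
        n_walkers, t_max = 20_000, 1_000  # ensemble size and truncation time

        returns = 0
        for _ in range(n_walkers):
            pos = 0
            for _ in range(t_max):
                if rng.random() > s:      # the walker evanesces this step
                    break
                pos += 1 if rng.random() < q else -1
                if pos == 0:              # first return to the origin
                    returns += 1
                    break
        print(f"estimated eventual return probability: {returns / n_walkers:.4f}")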

  13. Adaptive hybrid simulations for multiscale stochastic reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.

  14. Adaptive hybrid simulations for multiscale stochastic reaction networks.

    PubMed

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
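
    For orientation, the exact SSA that the hybrid method accelerates can be written in a few lines for the simplest SRN, a birth-death process (the rates and horizon are assumed; this is the plain Gillespie algorithm, not the paper's adaptive hybrid scheme):

        import numpy as np

        rng = np.random.default_rng(seed=9)
        k_prod, k_deg = 10.0, 1.0      # production rate and per-molecule degradation rate

        def ssa(t_end, x0=0):
            """Exact Stochastic Simulation Algorithm for a birth-death process."""
            t, x = 0.0, x0
            while True:
                rates = np.array([k_prod, k_deg * x])
                total = rates.sum()
                t += rng.exponential(1.0 / total)    # exponential waiting time
                if t >= t_end:
                    return x
                # Choose which reaction fires, proportional to its propensity.
                if rng.random() < rates[0] / total:
                    x += 1
                else:
                    x -= 1

        samples = np.array([ssa(t_end=10.0) for _ in range(1_000)])
        print(f"mean copy number {samples.mean():.2f} (stationary Poisson mean {k_prod / k_deg:.2f})")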

  15. Statistical characteristics of the sequential detection of signals in correlated noise

    NASA Astrophysics Data System (ADS)

    Averochkin, V. A.; Baranov, P. E.

    1985-10-01

    A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neumann-Pearson detection.

  16. Application of Bayesian Reliability Concepts to Cruise Missile Electronic Components

    DTIC Science & Technology

    1989-09-01

    …and contrast them with the more prevalent classical inference view. II. Literature Review. This literature review will consider current… events on the basis of whatever evidence is currently available. Then, if additional evidence is subsequently obtained, the initial probabilities are… Chay contends there is no longer any need to approximate continuous prior distributions through discretization, because current computer calculations…

  17. Discrete hierarchy of sizes and performances in the exchange-traded fund universe

    NASA Astrophysics Data System (ADS)

    Vandermarliere, B.; Ryckebusch, J.; Schoors, K.; Cauwels, P.; Sornette, D.

    2017-03-01

    Using detailed statistical analyses of the size distribution of a universe of equity exchange-traded funds (ETFs), we discover a discrete hierarchy of sizes, which imprints a log-periodic structure on the probability distribution of ETF sizes that dominates the details of the asymptotic tail. This allows us to propose a classification of the studied universe of ETFs into seven size layers approximately organized according to a multiplicative ratio of 3.5 in their total market capitalization. Introducing a similarity metric generalizing the Herfindahl index, we find that the largest ETFs exhibit a significantly stronger intra-layer and inter-layer similarity compared with the smaller ETFs. Comparing the performance across the seven discerned ETF size layers, we find an inverse size effect, namely that large ETFs performed significantly better than small ones in both 2014 and 2015.

  18. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249

  19. Discrete Thermodynamics

    DOE PAGES

    Margolin, L. G.; Hunter, A.

    2017-10-18

    Here, we consider the dependence of velocity probability distribution functions on the finite size of a thermodynamic system. We are motivated by applications to computational fluid dynamics, hence discrete thermodynamics. We begin by describing a coarsening process that represents geometric renormalization. Then, based only on the requirements of conservation, we demonstrate that the pervasive assumption of local thermodynamic equilibrium is not form invariant. We develop a perturbative correction that restores form invariance to second order in a small parameter associated with macroscopic gradients. Finally, we interpret the corrections in terms of unresolved kinetic energy and discuss the implications of our results both in theory and as applied to numerical simulation.

  20. Discrete Thermodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, L. G.; Hunter, A.

    Here, we consider the dependence of velocity probability distribution functions on the finite size of a thermodynamic system. We are motivated by applications to computational fluid dynamics, hence discrete thermodynamics. We begin by describing a coarsening process that represents geometric renormalization. Then, based only on the requirements of conservation, we demonstrate that the pervasive assumption of local thermodynamic equilibrium is not form invariant. We develop a perturbative correction that restores form invariance to second order in a small parameter associated with macroscopic gradients. Finally, we interpret the corrections in terms of unresolved kinetic energy and discuss the implications of our results both in theory and as applied to numerical simulation.

  1. Repelling, binding, and oscillating of two-particle discrete-time quantum walks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qinghao; Li, Zhi-Jian, E-mail: zjli@sxu.edu.cn

    In this paper, we investigate the effects of particle–particle interaction and static force on the propagation of probability distribution in two-particle discrete-time quantum walk, where the interaction and static force are expressed as a collision phase and a linear position-dependent phase, respectively. It is found that the interaction can lead to boson repelling and fermion binding. The static force also induces Bloch oscillation and results in a continuous transition from boson bunching to fermion anti-bunching. The interplays of particle–particle interaction, quantum interference, and Bloch oscillation provide a versatile framework to study and simulate many-particle physics via quantum walks.

  2. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
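
    A hedged numerical sketch for one special case, a two-interval forced-choice task with discrete observation distributions, where the maximum-likelihood observer picks the interval whose observation has the larger likelihood ratio and guesses on ties (the toy pmfs are assumptions; the paper's general formula and MATLAB code are not reproduced):

        import numpy as np

        # Assumed discrete sensory-activity distributions under noise and signal.
        p_noise  = np.array([0.35, 0.30, 0.20, 0.10, 0.05])
        p_signal = np.array([0.05, 0.10, 0.20, 0.30, 0.35])

        lr = p_signal / p_noise            # likelihood ratio of each observation
        pc = 0.0
        for i, ps in enumerate(p_signal):
            for j, pn in enumerate(p_noise):
                if lr[i] > lr[j]:
                    pc += ps * pn          # signal interval correctly chosen
                elif lr[i] == lr[j]:
                    pc += 0.5 * ps * pn    # tie resolved by guessing
        print(f"maximum Pc (2AFC): {pc:.4f}")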

  3. Muscle categorization using PDF estimation and Naive Bayes classification.

    PubMed

    Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W

    2012-01-01

    The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e., it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques that are based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs, and then Bayes rule is used to aggregate the MUPT characterization probabilities into muscle-level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than that of the discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Using Gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD, and further improvements resulted from using feature value histograms to choose more representative PDFs, for instance, using log-normal distributions to represent skewed histograms.
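
    A hedged sketch of the aggregation step, naive Bayes over Gaussian per-category PDFs for a single hypothetical MUP feature (the category means, variances, priors, and train values are all toy assumptions):

        import numpy as np

        def gaussian_pdf(x, mu, sigma):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        # Hypothetical per-category PDFs for one MUP feature (e.g., duration in ms).
        categories = {"normal": (10.0, 2.0), "myopathic": (7.0, 2.0)}
        prior = {"normal": 0.5, "myopathic": 0.5}
        trains = [8.1, 7.4, 9.0, 6.8]       # feature values for four MUPTs (toy data)

        # Bayes rule: multiply train likelihoods per category, then normalize.
        post = {c: prior[c] * np.prod([gaussian_pdf(x, mu, sd) for x in trains])
                for c, (mu, sd) in categories.items()}
        z = sum(post.values())
        for c, v in post.items():
            print(f"P({c} | data) = {v / z:.3f}")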

  4. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and the multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  5. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and the multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
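
    The Gumbel assumption is what yields the closed-form MNL choice probabilities P(i) = exp(V_i) / Σ_j exp(V_j). A hedged simulation check (the utilities are assumed) that i.i.d. standard Gumbel error terms in a random-utility model reproduce this formula:

        import numpy as np

        rng = np.random.default_rng(seed=12)
        V = np.array([1.0, 0.5, 0.0])          # assumed systematic utilities
        n = 200_000

        # Random utility with i.i.d. standard Gumbel error terms.
        eps = rng.gumbel(size=(n, 3))
        choices = np.argmax(V + eps, axis=1)   # utility-maximizing alternative

        simulated = np.bincount(choices, minlength=3) / n
        mnl = np.exp(V) / np.exp(V).sum()      # closed-form MNL probabilities
        for k in range(3):
            print(f"alternative {k}: simulated {simulated[k]:.4f}, MNL {mnl[k]:.4f}")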

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn; Ide, Yusuke

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, the deviation and mixing time of symmetric quantum walks are also investigated.

  7. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
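
    A hedged sketch of the first estimator, a Gaussian kernel density estimate; the rule-of-thumb scaling factor h ≈ σ n^(−1/5) used below is an assumption standing in for the report's interactive and automatic choices:

        import numpy as np

        rng = np.random.default_rng(seed=13)
        data = rng.standard_normal(500)         # toy sample

        h = data.std() * len(data) ** (-1 / 5)  # rule-of-thumb scaling factor (assumed)
        grid = np.linspace(-4.0, 4.0, 201)

        # Gaussian kernel density estimate evaluated on the grid.
        kernels = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2)
        kde = kernels.sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

        print(f"integral of the estimate ~ {np.trapz(kde, grid):.4f}")   # close to 1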

  8. Price percolation model

    NASA Astrophysics Data System (ADS)

    Kanai, Yasuhiro; Abe, Keiji; Seki, Yoichi

    2015-06-01

    We propose a price percolation model to reproduce the price distribution of components used in industrial finished goods. The intent is to show, using the price percolation model and a component category as an example, that percolation behaviors, which exist in the matter system, the ecosystem, and human society, also exist in abstract, random phenomena satisfying the power law. First, we discretize the total potential demand for a component category, considering it a random field. Second, we assume that the discretized potential demand corresponding to a function of a finished good turns into actual demand if the difficulty of function realization is less than the maximum difficulty of the realization. The simulations using this model suggest that changes in a component category's price distribution are due to changes in the total potential demand corresponding to the lattice size and the maximum difficulty of realization, which is an occupation probability. The results are verified using electronic components' sales data.

  9. Cost-effective solutions to maintaining smart grid reliability

    NASA Astrophysics Data System (ADS)

    Qin, Qiu

    As aging power systems increasingly work closer to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies, and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, consisting of both continuous state simulation and discrete event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulty mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system, with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework is presented, including a high-level recloser allocation scheme and a low-level recloser placement scheme. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.

  10. Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks.

    PubMed

    Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue

    2014-11-01

    Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011)PLEEE81539-375510.1103/PhysRevE.84.036105] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011)JMBLAJ0303-681210.1007/s00285-010-0331-2], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.
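
    A hedged sketch of a discrete-time SIS iteration in the spirit of MMCA, using one common variant that ignores same-step recovery-and-reinfection (the random network and parameter values are assumptions):

        import numpy as np

        rng = np.random.default_rng(seed=14)
        n, beta, mu = 200, 0.08, 0.2       # nodes, infection and recovery probabilities

        # Assumed Erdos-Renyi contact network for illustration.
        A = (rng.random((n, n)) < 0.05).astype(float)
        A = np.triu(A, 1)
        A = A + A.T

        p = np.full(n, 0.1)                # initial infection probabilities
        for _ in range(500):
            # q_i: probability that node i is NOT infected by any neighbor.
            q = np.prod(1.0 - beta * A * p[None, :], axis=1)
            p = (1.0 - p) * (1.0 - q) + p * (1.0 - mu)
        print(f"stationary prevalence rho = {p.mean():.4f}")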

  11. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.

  12. Violation of continuous-variable Einstein-Podolsky-Rosen steering with discrete measurements.

    PubMed

    Schneeloch, James; Dixon, P Ben; Howland, Gregory A; Broadbent, Curtis J; Howell, John C

    2013-03-29

    In this Letter, we derive an entropic Einstein-Podolsky-Rosen (EPR) steering inequality for continuous-variable systems using only experimentally measured discrete probability distributions and details of the measurement apparatus. We use this inequality to witness EPR steering between the positions and momenta of photon pairs generated in spontaneous parametric down-conversion. We examine the asymmetry between parties in this inequality, and show that this asymmetry can be used to reduce the technical requirements of experimental setups intended to demonstrate the EPR paradox. Furthermore, we develop a more stringent steering inequality that is symmetric between parties, and use it to show that the down-converted photon pairs also exhibit symmetric EPR steering.

  13. Violation of Continuous-Variable Einstein-Podolsky-Rosen Steering with Discrete Measurements

    NASA Astrophysics Data System (ADS)

    Schneeloch, James; Dixon, P. Ben; Howland, Gregory A.; Broadbent, Curtis J.; Howell, John C.

    2013-03-01

    In this Letter, we derive an entropic Einstein-Podolsky-Rosen (EPR) steering inequality for continuous-variable systems using only experimentally measured discrete probability distributions and details of the measurement apparatus. We use this inequality to witness EPR steering between the positions and momenta of photon pairs generated in spontaneous parametric down-conversion. We examine the asymmetry between parties in this inequality, and show that this asymmetry can be used to reduce the technical requirements of experimental setups intended to demonstrate the EPR paradox. Furthermore, we develop a more stringent steering inequality that is symmetric between parties, and use it to show that the down-converted photon pairs also exhibit symmetric EPR steering.

  14. Discrete-time entropy formulation of optimal and adaptive control problems

    NASA Technical Reports Server (NTRS)

    Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.

    1992-01-01

    The discrete-time version of the entropy formulation of optimal control problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying the certainty equivalent and adaptive control laws.

  15. Generic emergence of power law distributions and Lévy-Stable intermittent fluctuations in discrete logistic systems

    NASA Astrophysics Data System (ADS)

    Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin

    1998-08-01

    The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form w_i(t+1) = λ(t) w_i(t) + a w̄(t) − b w_i(t) w̄(t) is studied by computer simulations. The variables w_i, i = 1, ..., N, are the individual system components and w̄(t) = (1/N) Σ_i w_i(t) is their average. The parameters a and b are constants, while λ(t) is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution P(w,t) of the system components w_i turns out to fulfill a Pareto power law P(w,t) ~ w^(−1−α). The time evolution of w̄(t) presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index α, showing an intricate relation between the distribution of the w_i's at a given time and the temporal fluctuations of their average.
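
    A direct simulation of this system takes only a few lines. The sketch below uses a synchronous update with λ(t) drawn independently per component from a uniform distribution (one of several variants), and a crude Hill estimate of the tail index; both choices are our assumptions, not the paper's exact protocol:

        import numpy as np

        rng = np.random.default_rng(0)
        N, a, b = 10_000, 1e-4, 1e-4

        w = np.ones(N)
        for _ in range(5_000):
            wbar = w.mean()
            lam = rng.uniform(0.9, 1.1, size=N)    # assumed lambda distribution
            w = lam * w + a * wbar - b * w * wbar  # w_i(t+1)

        # Crude Hill estimate of the Pareto tail index from the top k values
        k = 500
        tail = np.sort(w)[-k:]
        alpha_hat = 1.0 / np.mean(np.log(tail / tail[0]))
        print(f"estimated tail index alpha ~ {alpha_hat:.2f}")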

  16. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.

  17. The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2007-01-01

    A generalized single step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.

  18. The SMM model as a boundary value problem using the discrete diffusion equation.

    PubMed

    Campbell, Joel

    2007-12-01

    A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
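
    The modified-Bessel form of the discrete solution mentioned in both records above is easy to evaluate numerically. A sketch for the unbounded symmetric case follows (no reflecting boundary, so the paper's minimum-length correction is omitted); theta is the expected number of single-step mutations:

        import numpy as np
        from scipy.special import ive  # exponentially scaled modified Bessel I

        def smm_pmf(k, theta):
            """P(net repeat displacement = k) for the symmetric single-step SMM
            in continuous time: exp(-theta) * I_|k|(theta).  Using ive, where
            ive(v, x) = exp(-x) * iv(v, x), avoids overflow at large theta."""
            return ive(np.abs(k), theta)

        k = np.arange(-50, 51)
        print(smm_pmf(k, theta=4.0).sum())  # ~1.0 once the grid covers the mass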

  19. Probabilistic 3D data fusion for multiresolution surface generation

    NASA Technical Reports Server (NTRS)

    Manduchi, R.; Johnson, A. E.

    2002-01-01

    In this paper we present an algorithm for adaptive resolution integration of 3D data collected from multiple distributed sensors. The input to the algorithm is a set of 3D surface points and associated sensor models. Using a probabilistic rule, a surface probability function is generated that represents the probability that a particular volume of space contains the surface. The surface probability function is represented using an octree data structure; regions of space with samples of large covariance are stored at a coarser level than regions of space containing samples with smaller covariance. The algorithm outputs an adaptive resolution surface generated by connecting points that lie on the ridge of surface probability with triangles scaled to match the local discretization of space given by the algorithm. We present results from 3D data generated by scanning lidar and structure from motion.

  20. Distinguishability notion based on Wootters statistical distance: Application to discrete maps

    NASA Astrophysics Data System (ADS)

    Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.

    2017-08-01

    We study the distinguishability notion given by Wootters for states represented by probability density functions. This presents the particularity that it can also be used for defining a statistical distance in chaotic unidimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄, we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. Also, we give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in the phase space. We illustrate the results in the case of the logistic and the circle maps numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the metric space associated with arbitrary probability distributions (not necessarily invariant densities) is given along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinal of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
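
    For discrete distributions (e.g., histograms over cells of a phase-space partition), Wootters' statistical distance reduces to the angle between the square-root probability vectors, which the following sketch computes (function name is ours):

        import numpy as np

        def wootters_distance(p, q):
            """d(p, q) = arccos( sum_i sqrt(p_i * q_i) ) for two discrete
            probability distributions p and q on the same support."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            bc = np.sum(np.sqrt(p * q))              # Bhattacharyya coefficient
            return np.arccos(np.clip(bc, 0.0, 1.0))  # clip guards rounding error

        print(wootters_distance([0.5, 0.5], [0.9, 0.1]))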

  1. A SAS-based solution to evaluate study design efficiency of phase I pediatric oncology trials via discrete event simulation.

    PubMed

    Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M

    2008-06-01

    Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.

  2. Bayesian probability of success for clinical trials using historical data

    PubMed Central

    Ibrahim, Joseph G.; Chen, Ming-Hui; Lakshminarayanan, Mani; Liu, Guanghan F.; Heyse, Joseph F.

    2015-01-01

    Developing sophisticated statistical methods for go/no-go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for inclusion of covariates as well as allowing for historical data based on the treatment regimen, and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general and can be used with univariate or multivariate continuous or discrete data, and it generalizes Chuang-Stein’s work. This methodology will be invaluable for informing the scientist on the likelihood of success of the compound, while including the information of covariates for patient characteristics in the trial population for planning future pre-market or post-market trials. PMID:25339499

  3. Bayesian probability of success for clinical trials using historical data.

    PubMed

    Ibrahim, Joseph G; Chen, Ming-Hui; Lakshminarayanan, Mani; Liu, Guanghan F; Heyse, Joseph F

    2015-01-30

    Developing sophisticated statistical methods for go/no-go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for inclusion of covariates as well as allowing for historical data based on the treatment regimen, and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general and can be used with univariate or multivariate continuous or discrete data, and it generalizes Chuang-Stein's work. This methodology will be invaluable for informing the scientist on the likelihood of success of the compound, while including the information of covariates for patient characteristics in the trial population for planning future pre-market or post-market trials. Copyright © 2014 John Wiley & Sons, Ltd.
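
    The core idea, stripped of covariates and historical-data priors, can be sketched in a few lines for a single-arm binary endpoint: draw the response rate from its posterior, simulate the future trial, and average. This is a simplification for illustration only, not the paper's criterion:

        import numpy as np

        rng = np.random.default_rng(1)

        def prob_of_success(successes, n, n_future, min_responders, draws=100_000):
            """Monte Carlo probability that a future trial of n_future patients
            yields at least min_responders, given `successes` out of n so far
            and a uniform Beta(1, 1) prior on the response rate (assumed)."""
            theta = rng.beta(1 + successes, 1 + n - successes, size=draws)
            future = rng.binomial(n_future, theta)
            return np.mean(future >= min_responders)

        print(prob_of_success(successes=12, n=30, n_future=100, min_responders=35))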

  4. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
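
    A compact version of the single-pass recipe (colour Gaussian white noise in the frequency domain, then apply the memoryless inverse-CDF transform) might look as follows; the Gaussian-shaped target spectrum and gamma target marginal are illustrative assumptions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        def colored_samples(n, corr_len, marginal=stats.gamma(a=2.0)):
            white = rng.standard_normal(n)
            k = np.fft.rfftfreq(n)
            filt = np.exp(-0.5 * (k * corr_len) ** 2)   # assumed target spectrum
            colored = np.fft.irfft(np.fft.rfft(white) * filt, n)
            colored /= colored.std()                     # unit-variance Gaussian
            u = stats.norm.cdf(colored)                  # Gaussian ranks in (0, 1)
            return marginal.ppf(u)                       # desired marginal PDF

        x = colored_samples(2**16, corr_len=50.0)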

  5. Chord-length and free-path distribution functions for many-body systems

    NASA Astrophysics Data System (ADS)

    Lu, Binglin; Torquato, S.

    1993-04-01

    We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or "phases." The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the "mean intercept length" or "mean chord length." The chord-length distribution function is of importance in transport phenomena and problems involving "discrete free paths" of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material and we show that it is actually the chord-length distribution function for the system in which the "pore space" is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal path function gives the probability of finding a line segment of length z wholly in one of the "phases" when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersivity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.

  6. Persistence Probabilities of Two-Sided (Integrated) Sums of Correlated Stationary Gaussian Sequences

    NASA Astrophysics Data System (ADS)

    Aurzada, Frank; Buck, Micha

    2018-02-01

    We study the persistence probability for some two-sided, discrete-time Gaussian sequences that are discrete-time analogues of fractional Brownian motion and integrated fractional Brownian motion, respectively. Our results extend the corresponding ones in continuous time in Molchan (Commun Math Phys 205(1):97-111, 1999) and Molchan (J Stat Phys 167(6):1546-1554, 2017) to a wide class of discrete-time processes.

  7. A new computer code for discrete fracture network modelling

    NASA Astrophysics Data System (ADS)

    Xu, Chaoshui; Dowd, Peter

    2010-03-01

    The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
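
    As a flavour of the marked-point-process machinery, the sketch below draws a two-dimensional homogeneous Poisson fracture set with lognormal trace lengths and von Mises orientations; the mark distributions are illustrative choices, not the package's defaults:

        import numpy as np

        rng = np.random.default_rng(3)

        def poisson_fracture_set(intensity, region=(0.0, 10.0, 0.0, 10.0)):
            """Homogeneous Poisson marked point process: fracture centres are
            uniform in the window, each marked with a length and an angle."""
            x0, x1, y0, y1 = region
            n = rng.poisson(intensity * (x1 - x0) * (y1 - y0))
            centres = np.column_stack([rng.uniform(x0, x1, n),
                                       rng.uniform(y0, y1, n)])
            lengths = rng.lognormal(mean=0.0, sigma=0.5, size=n)
            angles = rng.vonmises(mu=0.0, kappa=2.0, size=n)  # radians
            return centres, lengths, angles

        centres, lengths, angles = poisson_fracture_set(intensity=0.5)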

  8. Computer simulation of the probability that endangered whales will interact with oil spills, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, M.; Jayko, K.; Bowles, A.

    1986-10-01

    A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration and diving-surfacing models, together with an oil-spill-trajectory model, comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The distribution of animals is represented in space and time by discrete points, each of which may represent one or more whales. The movement of a whale point is governed by a random-walk algorithm which stochastically follows a migratory pathway.

  9. Defense Strategies for Asymmetric Networked Systems with Discrete Components.

    PubMed

    Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun

    2018-05-03

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.

  10. Defense Strategies for Asymmetric Networked Systems with Discrete Components

    PubMed Central

    Rao, Nageswara S. V.; Ma, Chris Y. T.; Hausken, Kjell; He, Fei; Yau, David K. Y.

    2018-01-01

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models. PMID:29751588

  11. Finite element probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacvarov, D.C.

    1981-01-01

    A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The utilized approach of applying the finite element method for probabilistic risk assessment is demonstrated to be very powerful. There are two reasons for this. First, the finite element method is inherently suitable for analysis of three dimensional spaces where the parameters, such as three variate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three dimensional probability spaces, thus yielding high accuracy in critical regions, such as the area of the low probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem to a manageable low level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electro-magnetic transients program. Compared to other available methods the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude computationally more efficient. The method is especially suited for accurate assessment of rare, very low probability events.

  12. Evolution of Particle Size Distributions in Fragmentation Over Time

    NASA Astrophysics Data System (ADS)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution, but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs with discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as the equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. The maturation index can increment continuously, for example under grinding conditions, or as discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model to the evolution of particle size distributions associated with episodic and continuous fragmentation and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract Submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth 91(B2), 1921-1926.
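
    The repeated-trial picture in this model invites a direct simulation. The sketch below applies one "fragmentation event" at a time, halving each particle with a fixed probability; binary splitting and the halving rule are our simplifications of the general cascade, not the paper's closed-form solution:

        import numpy as np

        rng = np.random.default_rng(4)

        def fragment(sizes, p_frac, events, size_limit=1e-6):
            """Each event: every particle above size_limit fractures into two
            halves with probability p_frac; repeated events mature the
            population towards a comminuted powder."""
            for _ in range(events):
                out = []
                for s in sizes:
                    if s > size_limit and rng.random() < p_frac:
                        out.extend([0.5 * s, 0.5 * s])
                    else:
                        out.append(s)
                sizes = out
            return np.asarray(sizes)

        sizes = fragment([1.0], p_frac=0.7, events=20)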

  13. A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.

    2005-01-01

    We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, trapping in false minima, and long-term optimization.

  14. Mean, covariance, and effective dimension of stochastic distributed delay dynamics

    NASA Astrophysics Data System (ADS)

    René, Alexandre; Longtin, André

    2017-11-01

    Dynamical models are often required to incorporate both delays and noise. However, the inherently infinite-dimensional nature of delay equations makes formal solutions to stochastic delay differential equations (SDDEs) challenging. Here, we present an approach, similar in spirit to the analysis of functional differential equations, but based on finite-dimensional matrix operators. This results in a method for obtaining both transient and stationary solutions that is directly amenable to computation, and applicable to first order differential systems with either discrete or distributed delays. With fewer assumptions on the system's parameters than other current solution methods and no need to be near a bifurcation, we decompose the solution to a linear SDDE with arbitrary distributed delays into natural modes, in effect the eigenfunctions of the differential operator, and show that relatively few modes can suffice to approximate the probability density of solutions. Thus, we are led to conclude that noise makes these SDDEs effectively low dimensional, which opens the possibility of practical definitions of probability densities over their solution space.

  15. Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.

    PubMed

    Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence

    2012-08-29

    Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simple and less detailed description of the biological systems, efficiently describes stable state identification, but remains inconvenient in describing the transient kinetics leading to these states. In this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe more accurately the dynamical behavior of biological processes as it follows the evolution of concentrations or activities of chemical species as a function of time, but requires a large amount of parameter information that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous time Markov process applied on a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential equations on probability distributions. We developed a C++ software, MaBoSS, that is able to simulate such a system by applying Kinetic Monte-Carlo (or Gillespie algorithm) on the Boolean state space. This software, parallelized and optimized, computes the temporal evolution of probability distributions and estimates stationary distributions. Applications of the Boolean Kinetic Monte-Carlo are demonstrated for three qualitative models: a toy model, a published model of p53/Mdm2 interaction and a published model of the mammalian cell cycle. Our approach allows us to describe kinetic phenomena which were difficult to handle in the original models. In particular, transient effects are represented by time dependent probability distributions, interpretable in terms of cell populations.
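
    In miniature, the algorithm amounts to a kinetic Monte-Carlo walk on the Boolean hypercube: compute a flip rate for every node from the current state, advance time by an exponential waiting time, and flip one node chosen in proportion to its rate. A sketch follows; the `rates` callback stands in for the transition-rate language described above:

        import numpy as np

        rng = np.random.default_rng(5)

        def boolean_gillespie(state, rates, t_end):
            """Continuous-time trajectory on a Boolean state space.
            rates(state) -> per-node flip rates for the current 0/1 vector."""
            t, traj = 0.0, [(0.0, state.copy())]
            while t < t_end:
                r = rates(state)
                total = r.sum()
                if total == 0.0:
                    break                            # stable (absorbing) state
                t += rng.exponential(1.0 / total)    # waiting time
                node = rng.choice(state.size, p=r / total)
                state[node] ^= 1                     # flip the selected node
                traj.append((t, state.copy()))
            return traj

        # Toy example: two independent nodes, each flipping at unit rate
        traj = boolean_gillespie(np.array([0, 1]), lambda s: np.ones(s.size), 10.0)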

  16. On defense strategies for system of systems using aggregated correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.

    2017-04-01

    We consider a System of Systems (SoS) wherein each system S_i, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of SoS given the failure of an individual system. We formulate the problem of ensuring the survival of SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.

  17. How LO Can You GO? Using the Dice-Based Golf Game GOLO to Illustrate Inferences on Proportions and Discrete Probability Distributions

    ERIC Educational Resources Information Center

    Stephenson, Paul; Richardson, Mary; Gabrosek, John; Reischman, Diann

    2009-01-01

    This paper describes an interactive activity that revolves around the dice-based golf game GOLO. The GOLO game can be purchased at various retail locations or online at igolo.com. In addition, the game may be played online free of charge at igolo.com. The activity is completed in four parts. The four parts can be used in a sequence or they can be…

  18. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.

  19. Modeling the distribution of colonial species to improve estimation of plankton concentration in ballast water

    NASA Astrophysics Data System (ADS)

    Rajakaruna, Harshana; VandenByllaardt, Julie; Kydd, Jocelyn; Bailey, Sarah

    2018-03-01

    The International Maritime Organization (IMO) has set limits on allowable plankton concentrations in ballast water discharge to minimize aquatic invasions globally. Previous guidance on ballast water sampling and compliance decision thresholds was based on the assumption that probability distributions of plankton are Poisson when spatially homogenous, or negative binomial when heterogeneous. We propose a hierarchical probability model, which incorporates distributions at the level of particles (i.e., discrete individuals plus colonies per unit volume) and also within particles (i.e., individuals per particle) to estimate the average plankton concentration in ballast water. We examined the performance of the models using data for plankton in the size class ≥ 10 μm and < 50 μm, collected from five different depths of a ballast tank of a commercial ship in three independent surveys. We show that the data fit to the negative binomial and the hierarchical probability models equally well, with both models performing better than the Poisson model at the scale of our sampling. The hierarchical probability model, which accounts for both the individuals and the colonies in a sample, reduces the uncertainty associated with the concentration estimation, and improves the power of rejecting the decision on ship's compliance when a ship does not truly comply with the standard. We show examples of how to test ballast water compliance using the above models.
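
    The hierarchical construction (a distribution over particles per volume, compounded with a distribution over individuals per particle) can be mimicked with a short simulation; the negative binomial and the geometric used below are illustrative stand-ins, not the fitted model from the surveys:

        import numpy as np

        rng = np.random.default_rng(6)

        def simulate_concentration(volume, particle_conc, k, mean_per_particle,
                                   reps=10_000):
            """Individuals per unit volume when particle counts are negative
            binomial (mean particle_conc * volume, dispersion k) and each
            particle carries a geometric number of individuals on {1, 2, ...}."""
            mu = particle_conc * volume
            particles = rng.negative_binomial(k, k / (k + mu), size=reps)
            totals = np.array([
                rng.geometric(1.0 / mean_per_particle, size=n).sum() if n else 0
                for n in particles
            ])
            return totals / volume   # one concentration estimate per replicate

        est = simulate_concentration(volume=3.0, particle_conc=8.0, k=1.5,
                                     mean_per_particle=2.0)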

  20. On the optimal identification of tag sets in time-constrained RFID configurations.

    PubMed

    Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel

    2011-01-01

    In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time of the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. Besides, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.

  1. On the continuity of the stationary state distribution of DPCM

    NASA Astrophysics Data System (ADS)

    Naraghi-Pour, Morteza; Neuhoff, David L.

    1990-03-01

    Continuity and singularity properties of the stationary state distribution of differential pulse code modulation (DPCM) are explored. Two-level DPCM (i.e., delta modulation) operating on a first-order autoregressive source is considered, and it is shown that, when the magnitude of the DPCM prediction coefficient is between zero and one-half, the stationary state distribution is singularly continuous; i.e., it is not discrete but concentrates on an uncountable set with a Lebesgue measure of zero. Consequently, it cannot be represented with a probability density function. For prediction coefficients with magnitude greater than or equal to one-half, the distribution is pure, i.e., either absolutely continuous and representable with a density function, or singular. This problem is compared to the well-known and still substantially unsolved problem of symmetric Bernoulli convolutions.

  2. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations, equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed earlier for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  3. Simulation modeling for the health care manager.

    PubMed

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.

  4. How human drivers control their vehicle

    NASA Astrophysics Data System (ADS)

    Wagner, P.

    2006-08-01

    The data presented here show that human drivers apply a discrete noisy control mechanism to drive their vehicle. A car-following model built on these observations, together with some physical limitations (crash-freeness, acceleration), lead to non-Gaussian probability distributions in the speed difference and distance which are in good agreement with empirical data. All model parameters have a clear physical meaning and can be measured. Despite its apparent complexity, this model is simple to understand and might serve as a starting point to develop even quantitatively correct models.

  5. An Army-Centric System of Systems Analysis (SoSA) Definition

    DTIC Science & Technology

    2011-02-01

    [Garbled text extract; the recoverable fragments are: a citation to Suzuki, K. and Ikegami, T., "Homeodynamics in the Game of Life," in Artificial Life XI: Proceedings of the Eleventh International...; a figure caption describing (a) insect displacement as a function of sampling time and (b) the same dataset displaying displacement at time t versus displacement at time t + Δt; and a statement that the probability distribution of x(t_i), x(t_{i+1}), x(t_{i+2}), ..., x(t_{i+m-1}) is dependent upon the value of t_i, and similarly for a discrete time series.]

  6. Sampling--how big a sample?

    PubMed

    Aitken, C G

    1999-07-01

    It is thought that, in a consignment of discrete units, a certain proportion of the units contain illegal material. A sample of the consignment is to be inspected. Various methods for the determination of the sample size are compared. The consignment will be considered as a random sample from some super-population of units, a certain proportion of which contain drugs. For large consignments, a probability distribution, known as the beta distribution, for the proportion of the consignment which contains illegal material is obtained. This distribution is based on prior beliefs about the proportion. Under certain specific conditions the beta distribution gives the same numerical results as an approach based on the binomial distribution. The binomial distribution provides a probability for the number of units in a sample which contain illegal material, conditional on knowing the proportion of the consignment which contains illegal material. This is in contrast to the beta distribution which provides probabilities for the proportion of a consignment which contains illegal material, conditional on knowing the number of units in the sample which contain illegal material. The interpretation when the beta distribution is used is much more intuitively satisfactory. It is also much more flexible in its ability to cater for prior beliefs which may vary given the different circumstances of different crimes. For small consignments, a distribution, known as the beta-binomial distribution, for the number of units in the consignment which are found to contain illegal material, is obtained, based on prior beliefs about the number of units in the consignment which are thought to contain illegal material. As with the beta and binomial distributions for large samples, it is shown that, in certain specific conditions, the beta-binomial and hypergeometric distributions give the same numerical results. However, the beta-binomial distribution, as with the beta distribution, has a more intuitively satisfactory interpretation and greater flexibility. The beta and the beta-binomial distributions provide methods for the determination of the minimum sample size to be taken from a consignment in order to satisfy a certain criterion. The criterion requires the specification of a proportion and a probability.

  7. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form, which enhances the capability of uncertainty analysis; as a consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  8. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form, which enhances the capability of uncertainty analysis; as a consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
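
    The computational heart of the method, the analytically available Gaussian posterior under linear line-integral observations, fits in a few lines. The sketch below uses a fixed prior covariance for brevity, whereas the records above emphasise a non-stationary one with adaptive length scales:

        import numpy as np

        def gp_posterior(K, A, y, noise_var):
            """Posterior mean and covariance of emissivity g ~ N(0, K) observed
            through line integrals y = A g + e with e ~ N(0, noise_var * I)."""
            S = A @ K @ A.T + noise_var * np.eye(A.shape[0])
            G = K @ A.T @ np.linalg.inv(S)       # gain matrix
            mean = G @ y
            cov = K - G @ A @ K
            return mean, cov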

  9. Estimating the concordance probability in a survival analysis with a discrete number of risk groups.

    PubMed

    Heller, Glenn; Mo, Qianxing

    2016-04-01

    A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
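
    The tie-handling modification for discrete risk scores can be seen in a plain concordance estimator over usable pairs, where tied risks contribute one half; this sketch omits the inverse probability censoring weights of the paper's estimator:

        import numpy as np

        def c_index_discrete(time, event, risk):
            """Pairwise concordance: over pairs whose ordering is usable (the
            earlier time is an observed event), count risk[i] > risk[j] as 1
            and tied risks as 1/2."""
            time, event, risk = map(np.asarray, (time, event, risk))
            num = den = 0.0
            for i in range(len(time)):
                if not event[i]:
                    continue
                usable = time > time[i]          # j survives longer than i
                den += usable.sum()
                num += (risk[i] > risk[usable]).sum()
                num += 0.5 * (risk[i] == risk[usable]).sum()
            return num / den

        print(c_index_discrete([2, 4, 5, 7], [1, 1, 0, 1], [3, 2, 2, 1]))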

  10. H theorem for generalized entropic forms within a master-equation framework

    NASA Astrophysics Data System (ADS)

    Casas, Gabriela A.; Nobre, Fernando D.; Curado, Evaldo M. F.

    2016-03-01

    The H theorem is proven for generalized entropic forms, in the case of a discrete set of states. The associated probability distributions evolve in time according to a master equation, for which the corresponding transition rates depend on these entropic forms. An important equation describing the time evolution of the transition rates and probabilities in such a way as to drive the system towards an equilibrium state is found. In the particular case of the Boltzmann-Gibbs entropy, it is shown that this equation is satisfied in the microcanonical ensemble only for symmetric probability transition rates, characterizing a single path to the equilibrium state. This equation completes the proof of the H theorem for generalized entropic forms, associated with systems characterized by complex dynamics, e.g., presenting nonsymmetric probability transition rates and more than one path towards the same equilibrium state. Some examples considering generalized entropies of the literature are discussed, showing that they should be applicable to a wide range of natural phenomena, mainly those within the realm of complex systems.

  11. Defense strategies for asymmetric networked systems under composite utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

    We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use the composite utility functions composed of a survival probability term and a cost term, and the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of distributed cloud computing infrastructure.

  12. Metabolic networks evolve towards states of maximum entropy production.

    PubMed

    Unrean, Pornkamol; Srienc, Friedrich

    2011-11-01

    A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such state we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such reduced metabolic network metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Mode switching in volcanic seismicity: El Hierro 2011-2013

    NASA Astrophysics Data System (ADS)

    Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.

    2016-05-01

    The Gutenberg-Richter b value is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. Such studies typically analyze discrete time windows or phases, but the choice of such windows is subjective and can introduce significant bias. Here we minimize this sample bias by iteratively sampling catalogs with randomly chosen windows and then stack the resulting probability density functions for the estimated b̃ value to determine a net probability density function. We examine data from the El Hierro seismic catalog during a period of unrest in 2011-2013 and demonstrate clear multimodal behavior. Individual modes are relatively stable in time, but the most probable b̃ value intermittently switches between modes, one of which is similar to that of tectonic seismicity. Multimodality is primarily associated with intermittent activation and cessation of activity in different parts of the volcanic system rather than with any systematic underlying process.
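
    A minimal sketch of the window-stacking idea follows: b is estimated in many randomly chosen windows with the Aki-Utsu maximum-likelihood estimator b = log10(e)/(mean(M) - Mc), and the estimates are accumulated into one empirical density, which comes out multimodal when the catalog mixes two activity modes. The synthetic catalog, completeness magnitude Mc, and minimum window size are illustrative assumptions, not the El Hierro data.

      import numpy as np

      # Aki-Utsu maximum-likelihood b estimate, stacked over random windows.
      # Synthetic catalog: two consecutive activity "modes" with b = 0.8 and
      # b = 1.5; magnitudes above completeness Mc follow an exponential law.
      rng = np.random.default_rng(1)
      Mc = 1.0
      b_true = np.repeat([0.8, 1.5], 2000)
      mags = Mc + rng.exponential(scale=np.log10(np.e) / b_true)

      n, b_hats = len(mags), []
      for _ in range(5000):
          i, j = sorted(rng.integers(0, n, size=2))
          if j - i >= 50:                      # require enough events per window
              m = mags[i:j]
              b_hats.append(np.log10(np.e) / (m.mean() - Mc))

      # Stacked density of b estimates: two peaks, near 0.8 and near 1.5.
      hist, edges = np.histogram(b_hats, bins=60, range=(0.5, 2.0), density=True)
      print(edges[hist.argmax()])              # location of the dominant mode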

  14. Discretized kinetic theory on scale-free networks

    NASA Astrophysics Data System (ADS)

    Bertotti, Maria Letizia; Modanese, Giovanni

    2016-10-01

    The network of interpersonal connections is one of the possible heterogeneous factors which affect the income distribution emerging from micro-to-macro economic models. In this paper we equip our model discussed in [1, 2] with a network structure. The model is based on a system of n differential equations of the kinetic discretized-Boltzmann kind. The network structure is incorporated in a probabilistic way, through the introduction of a link density P(α) and of correlation coefficients P(β|α), which give the conditional probability that an individual with α links is connected to one with β links. We study the properties of the equations and give analytical results concerning the existence, normalization and positivity of the solutions. For a fixed network with P(α) = c/α^q, we investigate numerically the dependence of the detailed and marginal equilibrium distributions on the initial conditions and on the exponent q. Our results are compatible with those obtained from the Bouchaud-Mezard model and from agent-based simulations, and provide additional information about the dependence of the individual income on the level of connectivity.

  15. Explore Stochastic Instabilities of Periodic Points by Transition Path Theory

    NASA Astrophysics Data System (ADS)

    Cao, Yu; Lin, Ling; Zhou, Xiang

    2016-06-01

    We consider the noise-induced transitions from a linearly stable periodic orbit consisting of T periodic points in the randomly perturbed discrete logistic map. Traditional large deviation theory and asymptotic analysis in the small noise limit cannot distinguish the quantitative difference in noise-induced stochastic instabilities among the T periodic points. To attack this problem, we generalize the transition path theory to the discrete-time continuous-space stochastic process. In our first criterion to quantify the relative instability among the T periodic points, we use the distribution of the last passage location related to the transitions from the whole periodic orbit to a prescribed disjoint set. This distribution is related to the individual contributions to the transition rate from each periodic point. The second criterion is based on the competency of the transition paths associated with each periodic point. Both criteria utilize the reactive probability current in the transition path theory. Our numerical results for the logistic map reveal the transition mechanism of escaping from the stable periodic orbit and identify which periodic point is more prone to lose stability so as to make successful transitions under random perturbations.

  16. Dynamical quantum phase transitions in discrete time crystals

    NASA Astrophysics Data System (ADS)

    Kosior, Arkadiusz; Sacha, Krzysztof

    2018-05-01

    Discrete time crystals are related to nonequilibrium dynamics of periodically driven quantum many-body systems where the discrete time-translation symmetry of the Hamiltonian is spontaneously broken into another discrete symmetry. Recently, the concept of phase transitions has been extended to nonequilibrium dynamics of time-independent systems induced by a quantum quench, i.e., a sudden change of some parameter of the Hamiltonian. There, the return probability of a system to the ground state reveals singularities in time which are dubbed dynamical quantum phase transitions. We show that the quantum quench in a discrete time crystal leads to dynamical quantum phase transitions where the return probability of a periodically driven system to a Floquet eigenstate before the quench reveals singularities in time. It indicates that dynamical quantum phase transitions are not restricted to time-independent systems and can also be observed in systems that are periodically driven. We discuss how the phenomenon can be observed in ultracold atomic gases.

  17. Joint modeling of longitudinal data and discrete-time survival outcome.

    PubMed

    Qiu, Feiyou; Stein, Catherine M; Elston, Robert C

    2016-08-01

    A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting the discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0. © The Author(s) 2013.

  18. On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels

    DTIC Science & Technology

    2013-12-01

    Information-Driven Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks", which is to analyze and develop... underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system's parameters, the...thesis, we studied the performance of Discrete Memoryless Channels (DMC), arising in the context of cooperative underwater wireless sensor networks

  19. Evolution of the microstructure during the process of consolidation and bonding in soft granular solids.

    PubMed

    Yohannes, B; Gonzalez, M; Abebe, A; Sprockel, O; Nikfar, F; Kiang, S; Cuitiño, A M

    2016-04-30

    The evolution of microstructure during the powder compaction process was investigated using discrete particle modeling, which accounts for particle size distribution and material properties, such as plasticity, elasticity, and inter-particle bonding. The material properties were calibrated based on powder compaction experiments and validated based on tensile strength test experiments for lactose monohydrate and microcrystalline cellulose, which are commonly used excipients in the pharmaceutical industry. The probability distribution function and the orientation of contact forces were used to study the evolution of the microstructure during the application of compaction pressure, unloading, and ejection of the compact from the die. The probability distribution function reveals that the compression contact forces increase as the compaction force increases (or the relative density increases), while the maximum value of the tensile contact forces remains the same. During unloading of the compaction pressure, the distribution approaches a normal distribution with a mean value of zero. As the contact forces evolve, the anisotropy of the powder bed also changes. In particular, during loading, the compression contact forces are aligned along the direction of the compaction pressure, whereas the tensile contact forces are oriented perpendicular to the direction of the compaction pressure. After ejection, the contact forces become isotropic. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Dynamical Localization for Discrete Anderson Dirac Operators

    NASA Astrophysics Data System (ADS)

    Prado, Roberto A.; de Oliveira, César R.; Carvalho, Silas L.

    2017-04-01

    We establish dynamical localization for random Dirac operators on the d-dimensional lattice, with d ∈ {1, 2, 3}, in the three usual regimes: large disorder, band edge and 1D. These operators are discrete versions of the continuous Dirac operators and consist of the sum of a discrete free Dirac operator with a random potential. The potential is a diagonal matrix formed by different scalar potentials, which are sequences of independent and identically distributed random variables according to an absolutely continuous probability measure with bounded density and of compact support. We prove the exponential decay of fractional moments of the Green function for such models in each of the above regimes, i.e., (i) throughout the spectrum at large disorder, (ii) for energies near the band edges at arbitrary disorder and (iii) in dimension one, for all energies in the spectrum and arbitrary disorder. Dynamical localization in these regimes follows from the fractional moments method. The result in the one-dimensional regime contrasts with one previously obtained for the 1D Dirac model with Bernoulli potential.

  1. [Correlative analysis of the diversity patterns of regional surface water, NDVI and thermal environment].

    PubMed

    Duan, Jin-Long; Zhang, Xue-Lei

    2012-10-01

    Taking Zhengzhou City, the capital of Henan Province in Central China, as the study area, and by using the theories and methodologies of diversity, a discreteness evaluation of the regional surface water, normalized difference vegetation index (NDVI), and land surface temperature (LST) distribution was conducted at a 2 km x 2 km grid scale. Both the NDVI and the LST were divided into 4 levels, their spatial distribution diversity indices were calculated, and their connections were explored. The results showed that the theories and methodologies of diversity were operable and of practical significance for evaluating the discreteness of the spatial distribution of the regional thermal environment. There was a high overlap in location between the distributions of surface water and the lowest temperature region, and high vegetation coverage was often accompanied by low land surface temperature. From 1988 to 2009, the discreteness of the surface water distribution in the city showed an obvious decreasing trend. The discreteness of the surface water distribution had a close correlation with the discreteness of the temperature region distribution, while the discreteness of the NDVI classification distribution had a more complicated correlation with the discreteness of the temperature region distribution. Therefore, more environmental factors need to be included for a better evaluation.

  2. A Discrete Probability Function Method for the Equation of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths, including turbulence-radiation interactions, without the use of computer-intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation, unlike the stochastic method, where the number of realizations needed for convergence of higher-order moments increases rapidly. The DPF method is described for a representative path with spatial discretization of approximately integral-length-scale size. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
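
    The defining step, integrating a PDF over discrete intervals to obtain the DPF, is easy to sketch. Below, a lognormal "intensity" PDF stands in for the radiative intensity statistics (an illustrative assumption, not the paper's flame data); the DPF over each interval follows from differences of the CDF, and low-order moments can be formed directly from the DPF.

      import numpy as np
      from scipy import stats

      # DPF_k = integral of the PDF over the k-th discrete intensity interval,
      # evaluated here as differences of the cumulative distribution function.
      pdf = stats.lognorm(s=0.4, scale=10.0)         # hypothetical intensity PDF
      edges = np.linspace(0.0, 40.0, 21)             # discrete intensity intervals

      dpf = pdf.cdf(edges[1:]) - pdf.cdf(edges[:-1])
      print(dpf.sum())                               # ~1 (mass beyond 40 neglected)

      # Moments converge at the level of the DPF itself, e.g. the mean:
      centers = 0.5 * (edges[1:] + edges[:-1])
      print((centers * dpf).sum(), pdf.mean())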

  3. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
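
    The univariate, beta-invariant case is easy to exercise numerically. The Euler-Maruyama sketch below uses a Jacobi-type drift and multiplicative diffusion, a standard choice whose invariant is Beta(2κμ/σ², 2κ(1-μ)/σ²); these coefficients are an illustrative assumption, not necessarily the paper's exact specification.

      import numpy as np

      # Euler-Maruyama for the Jacobi-type SDE
      #   dX = kappa*(mu - X) dt + sigma*sqrt(X*(1-X)) dW,
      # whose invariant density is Beta(2*kappa*mu/sigma^2, 2*kappa*(1-mu)/sigma^2).
      rng = np.random.default_rng(2)
      kappa, mu, sigma = 1.0, 0.3, 0.8
      dt, steps, n = 1e-3, 5000, 20000          # ensemble of 20000 samples

      x = np.full(n, 0.5)
      for _ in range(steps):
          dw = rng.normal(scale=np.sqrt(dt), size=n)
          x = x + kappa * (mu - x) * dt + sigma * np.sqrt(x * (1 - x)) * dw
          x = np.clip(x, 1e-9, 1 - 1e-9)        # keep samples inside [0, 1]

      a = 2 * kappa * mu / sigma**2
      b = 2 * kappa * (1 - mu) / sigma**2
      print(x.mean(), a / (a + b))              # ensemble mean vs Beta(a, b) mean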

  4. Dissipative open systems theory as a foundation for the thermodynamics of linear systems.

    PubMed

    Delvenne, Jean-Charles; Sandberg, Henrik

    2017-03-06

    In this paper, we advocate the use of open dynamical systems, i.e. systems sharing input and output variables with their environment, and the dissipativity theory initiated by Jan Willems, as models of thermodynamical systems, at the microscopic and macroscopic level alike. We take linear systems as a study case, where we show how to derive a global Lyapunov function to analyse networks of interconnected systems. We define a suitable notion of dynamic non-equilibrium temperature that allows us to derive a discrete Fourier law ruling the exchange of heat between lumped, discrete-space systems, enriched with the Maxwell-Cattaneo correction. We complete these results with a brief recall of the steps that allow a complete derivation of the dissipation and fluctuation in macroscopic systems (i.e. at the level of probability distributions) from lossless and deterministic systems. This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).

  5. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. Of interest are the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  6. Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.

    PubMed

    Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G

    2017-02-17

    Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.

  7. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
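
    In the same spirit, the sketch below implements a minimal discrete event simulation of a fixed server pool serving a stochastic mix of request types; the request mix, arrival rate, and mean service times are illustrative assumptions, not the paper's calibrated model.

      import heapq, random

      # Minimal event-driven queue: Poisson arrivals of three request types,
      # exponential service times, and a fixed pool of servers standing in
      # for the enterprise resource constraint.
      random.seed(0)
      SERVERS, busy, queue, done, t = 4, 0, [], 0, 0.0
      mix = {"web": 0.6, "batch": 0.3, "query": 0.1}     # request-type mix
      mean_service = {"web": 1.0, "batch": 5.0, "query": 0.5}

      def start(kind, now, events):
          heapq.heappush(events, (now + random.expovariate(1 / mean_service[kind]), "done"))

      events = [(random.expovariate(1.5), "arrive")]
      while done < 10000:
          t, ev = heapq.heappop(events)
          if ev == "arrive":
              kind = random.choices(list(mix), weights=mix.values())[0]
              if busy < SERVERS:
                  busy += 1
                  start(kind, t, events)
              else:
                  queue.append(kind)                     # all servers busy: wait
              heapq.heappush(events, (t + random.expovariate(1.5), "arrive"))
          else:
              done += 1
              if queue:
                  start(queue.pop(0), t, events)         # server stays busy
              else:
                  busy -= 1
      print("completed", done, "requests in simulated time", round(t, 1))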

  8. Some Classes of Imperfect Information Finite State-Space Stochastic Games with Finite-Dimensional Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEneaney, William M.

    2004-08-15

    Stochastic games under imperfect information are typically computationally intractable even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the problem form here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional, and obtained via dynamic programming, but has a nonstandard form due to the necessity of an expanded state variable. Under a saddle point assumption, Certainty Equivalence is obtained and the proposed function is indeed an information state.

  9. An Optimization-Based Framework for the Transformation of Incomplete Biological Knowledge into a Probabilistic Structure and Its Application to the Utilization of Gene/Protein Signaling Pathways in Discrete Phenotype Classification.

    PubMed

    Esfahani, Mohammad Shahrokh; Dougherty, Edward R

    2015-01-01

    Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilization of prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are neither complete nor regulatory, and have no timing associated with them, they are capable of constraining the set of possible models representing the underlying interaction between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways into prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability on a Multinomial distribution. Since the Dirichlet distribution is the conjugate prior for the Multinomial distribution, we propose optimization paradigms to estimate the parameters of a Dirichlet distribution in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: the mammalian cell cycle and a p53 pathway model.

  10. On the distribution of interspecies correlation for Markov models of character evolution on Yule trees.

    PubMed

    Mulder, Willem H; Crawford, Forrest W

    2015-01-07

    Efforts to reconstruct phylogenetic trees and understand evolutionary processes depend fundamentally on stochastic models of speciation and mutation. The simplest continuous-time model for speciation in phylogenetic trees is the Yule process, in which new species are "born" from existing lineages at a constant rate. Recent work has illuminated some of the structural properties of Yule trees, but it remains mostly unknown how these properties affect sequence and trait patterns observed at the tips of the phylogenetic tree. Understanding the interplay between speciation and mutation under simple models of evolution is essential for deriving valid phylogenetic inference methods and gives insight into the optimal design of phylogenetic studies. In this work, we derive the probability distribution of interspecies covariance under Brownian motion and Ornstein-Uhlenbeck models of phenotypic change on a Yule tree. We compute the probability distribution of the number of mutations shared between two randomly chosen taxa in a Yule tree under discrete Markov mutation models. Our results suggest summary measures of phylogenetic information content, illuminate the correlation between site patterns in sequences or traits of related organisms, and provide heuristics for experimental design and reconstruction of phylogenetic trees. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from the real world is shown as a proof of concept.

  12. Probabilistic approach to lysozyme crystal nucleation kinetics.

    PubMed

    Dimitrov, Ivaylo L; Hodzhaoglu, Feyzim V; Koleva, Dobryana P

    2015-09-01

    Nucleation of lysozyme crystals in quiescent solutions at a regime of progressive nucleation is investigated under an optical microscope at conditions of constant supersaturation. A method based on the stochastic nature of crystal nucleation and using discrete time sampling of small solution volumes for the presence or absence of detectable crystals is developed. It allows probabilities for crystal detection to be experimentally estimated. One hundred single samplings were used for each probability determination for 18 time intervals and six lysozyme concentrations. Fitting of a particular probability function to experimentally obtained data made possible the direct evaluation of stationary rates for lysozyme crystal nucleation, the time for growth of supernuclei to a detectable size and probability distribution of nucleation times. Obtained stationary nucleation rates were then used for the calculation of other nucleation parameters, such as the kinetic nucleation factor, nucleus size, work for nucleus formation and effective specific surface energy of the nucleus. The experimental method itself is simple and adaptable and can be used for crystal nucleation studies of arbitrary soluble substances with known solubility at particular solution conditions.
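
    A sketch of the fitting step is below: for stationary nucleation in a volume V, the probability that a sampled volume contains at least one detectable crystal by time t is commonly written P(t) = 1 - exp(-Js*V*(t - tg)) for t > tg, with Js the stationary nucleation rate and tg the time for supernuclei to grow to detectable size. The volume, sampling times, and detection frequencies are synthetic stand-ins, and this functional form is a standard assumption rather than a quotation of the paper's exact expression.

      import numpy as np
      from scipy.optimize import curve_fit

      # Fit P(t) = 1 - exp(-Js*V*(t - tg)) to detection frequencies estimated
      # from repeated samplings (100 single samplings per point in the paper).
      V = 1e-4                                   # sampled solution volume, cm^3
      t = np.linspace(10, 180, 18)               # sampling times, min
      P_obs = np.array([0.02, 0.13, 0.23, 0.32, 0.40, 0.46, 0.52, 0.58, 0.63,
                        0.67, 0.71, 0.74, 0.77, 0.80, 0.82, 0.84, 0.86, 0.87])

      def detect_prob(t, Js, tg):
          return np.where(t > tg, 1.0 - np.exp(-Js * V * (t - tg)), 0.0)

      (Js, tg), _ = curve_fit(detect_prob, t, P_obs, p0=(100.0, 5.0))
      print(f"Js ~ {Js:.0f} crystals cm^-3 min^-1, growth time tg ~ {tg:.1f} min")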

  13. Game-theoretic strategies for asymmetric networked systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

    We consider an infrastructure consisting of a network of systems each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems, and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash Equilibrium conditions which provide expressions for individual system survival probabilities, and also the expected capacity specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.

  14. Distribution pattern of public transport passenger in Yogyakarta, Indonesia

    NASA Astrophysics Data System (ADS)

    Narendra, Alfa; Malkhamah, Siti; Sopha, Bertha Maya

    2018-03-01

    The arrival and departure distribution pattern of Trans Jogja bus passengers is one of the fundamental models for simulation. The purpose of this paper is to build models of passenger flows. This research used passenger data from January to May 2014. No policy change to the operation system has since affected the nature of this pattern: the roads, buses, land uses, schedule, and people are relatively unchanged. The data were categorized by direction, day, and location, and each category was fitted to several well-known discrete distributions. The fitted distributions were compared by their AIC and BIC values; the negative binomial distribution had the smallest AIC and BIC values and was therefore chosen. Probability mass function (PMF) plots of the categorical negative binomial distribution models were compared to derive a generic model. The accepted generic negative binomial distribution has a size parameter of 0.7064 and a mu of 1.4504. The minimum and maximum passenger counts of the distribution are 0 and 41.
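
    A minimal sketch of the distribution-fitting step follows: maximum-likelihood fitting of a negative binomial to count data and an AIC comparison against a Poisson fit. The counts are synthetic stand-ins generated from the abstract's reported parameters (size ~ 0.7064, mu ~ 1.4504); the parameterization via scipy's nbinom is a standard choice, not necessarily the authors' software.

      import numpy as np
      from scipy import stats, optimize

      # Negative binomial with "size" r and mean m corresponds to scipy's
      # nbinom(r, p) with p = r / (r + m).
      rng = np.random.default_rng(3)
      size, mu = 0.7064, 1.4504
      counts = rng.negative_binomial(size, size / (size + mu), 2000)

      def nb_negloglik(theta):
          r, m = np.exp(theta)                   # log-parameters keep r, m > 0
          return -stats.nbinom.logpmf(counts, r, r / (r + m)).sum()

      res = optimize.minimize(nb_negloglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
      aic_nb = 2 * 2 + 2 * res.fun               # 2k - 2 log L with k = 2

      lam = counts.mean()                        # Poisson MLE
      aic_pois = 2 * 1 - 2 * stats.poisson.logpmf(counts, lam).sum()
      print("AIC  negative binomial:", aic_nb, " Poisson:", aic_pois)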

  15. How Bob Barker Would (Probably) Teach Discrete Mathematics

    ERIC Educational Resources Information Center

    Urness, Timothy

    2010-01-01

    This article proposes a discrete mathematics course in which games from "The Price Is Right" are used to engage students in a deeper, practical study of discrete mathematics. The games themselves are not the focus of the course; rather, the mathematical principles of the games give motivation for the concepts being taught. The game examples are…

  16. Warship Combat System Selection Methodology Based on Discrete Event Simulation

    DTIC Science & Technology

    2010-09-01

    Platform (from Spanish), PD: Damage Probability, PHit: Hit Probability, PKill: Kill Probability, RSM: Response Surface Model, SAM: Surface-Air Missile ...such a large target allows an assumption that the probability of a hit (PHit) is one. This structure can be considered as a bridge; therefore, the

  17. Inference of emission rates from multiple sources using Bayesian probability theory.

    PubMed

    Yee, Eugene; Flesch, Thomas K

    2010-03-01

    The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
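
    The closed-form heart of such an approach can be sketched for the linear source-receptor relation c = Aq + noise: with Gaussian measurement error and a Gaussian prior on the emission rates q, the posterior over q is Gaussian with explicit mean and covariance. The dispersion matrix A, noise scale, and prior scale below are illustrative assumptions; the paper's full Bayesian treatment also accounts for dispersion-model error and is not limited to this conjugate special case.

      import numpy as np

      # Linear source-receptor model: c = A q + noise.  With prior
      # q ~ N(0, tau^2 I) and noise of scale sigma, the posterior is Gaussian.
      rng = np.random.default_rng(4)
      n_sensors, n_sources = 8, 4
      A = rng.uniform(0.05, 1.0, size=(n_sensors, n_sources))  # dispersion matrix
      q_true = np.array([2.0, 0.5, 1.0, 3.0])                  # "true" rates
      sigma, tau = 0.05, 10.0
      c = A @ q_true + rng.normal(scale=sigma, size=n_sensors)

      precision = A.T @ A / sigma**2 + np.eye(n_sources) / tau**2
      cov = np.linalg.inv(precision)                           # posterior covariance
      q_mean = cov @ A.T @ c / sigma**2                        # posterior mean
      print("posterior mean:", q_mean.round(3))
      print("posterior sd:  ", np.sqrt(np.diag(cov)).round(3))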

  18. Dynamic properties of molecular motors in burnt-bridge models

    NASA Astrophysics Data System (ADS)

    Artyomov, Maxim N.; Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B.

    2007-08-01

    Dynamic properties of molecular motors that fuel their motion by actively interacting with underlying molecular tracks are studied theoretically via discrete-state stochastic 'burnt-bridge' models. The transport of the particles is viewed as an effective diffusion along one-dimensional lattices with periodically distributed weak links. When an unbiased random walker passes the weak link it can be destroyed ('burned') with probability p, providing a bias in the motion of the molecular motor. We present a theoretical approach that allows one to calculate exactly all dynamic properties of motor proteins, such as velocity and dispersion, under general conditions. It is found that dispersion is a decreasing function of the concentration of bridges, while the dependence of dispersion on the burning probability is more complex. Our calculations also show a gap in dispersion for very low concentrations of weak links or for very low burning probabilities which indicates a dynamic phase transition between unbiased and biased diffusion regimes. Theoretical findings are supported by Monte Carlo computer simulations.
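
    A direct Monte Carlo sketch of the model is easy to write: an unbiased walker on a 1D lattice with weak links every N bonds, each burned with probability p on first crossing and never recrossable, yields a net drift whose velocity and dispersion can be estimated from an ensemble of trajectories. The weak-link spacing, burning probability, and time horizon are illustrative; the paper computes these quantities exactly rather than by simulation.

      import numpy as np

      # Unbiased nearest-neighbour walk; weak links sit on every N-th bond.
      # Crossing an intact weak link burns it with probability p; a burned
      # link can never be recrossed, which rectifies the motion.
      rng = np.random.default_rng(5)
      N, p, T, walkers = 10, 0.1, 5000, 200
      finals = []
      for _ in range(walkers):
          x, burned = 0, set()
          for _ in range(T):
              step = 1 if rng.random() < 0.5 else -1
              bond = x if step == 1 else x - 1          # bond crossed by this step
              if bond % N == 0:                         # this bond is a weak link
                  if bond in burned:
                      continue                          # blocked: step rejected
                  if rng.random() < p:
                      burned.add(bond)
              x += step
          finals.append(x)
      finals = np.array(finals)
      print("velocity ~", finals.mean() / T)
      print("dispersion ~", finals.var() / (2 * T))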

  19. Studies on the latitudinal distribution of ground-based geomagnetic pulsations and fluctuations in the interplanetary medium using discrete mathematical analysis methods

    NASA Astrophysics Data System (ADS)

    Zelinsky, N. R.; Kleimenova, N. G.; Malysheva, L. M.

    2014-07-01

    Ground-based geomagnetic Pc5 (2-7 mHz) pulsations, caused by the passage of dense transients (density disturbances) in the solar wind, were analyzed. It was shown that intense bursts can appear in the density of the solar wind and its fluctuations, up to Np ~ 30-50 cm^-3, even during the most magnetically calm year in the past decades (2009). The analysis, performed using one of the latest methods of discrete mathematical analysis (DMA), is presented. The energy functional of a time-series fragment (called "anomaly rectification" in DMA terms) was calculated for two such events. It was established that fluctuations in the dynamic pressure (density) of the solar wind (SW) cause the global excitation of Pc5 geomagnetic pulsations in the daytime sector of the Earth's magnetosphere, i.e., from polar to equatorial latitudes. Such pulsations started and ended suddenly and simultaneously at all latitudes. Fluctuations in the interplanetary magnetic field (IMF) turned out to be less geoeffective in exciting geomagnetic pulsations than fluctuations in the SW density. The pulsation generation mechanisms in various structural regions of the magnetosphere were probably different. It was therefore concluded that the most probable source of ground-based pulsations is fluctuations of the corresponding periods in the SW density.

  20. Hybrid discrete/continuum algorithms for stochastic reaction networks

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; ...

    2014-10-22

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. The performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

  1. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.

  3. N-tag probability law of the symmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb

    2018-06-01

    The symmetric exclusion process (SEP), in which particles hop symmetrically on a discrete line with hard-core constraints, is a paradigmatic model of subdiffusion in confined systems. This anomalous behavior is a direct consequence of strong spatial correlations induced by the requirement that the particles cannot overtake each other. Although this fact has long been recognized qualitatively, up to now there has been no full quantitative determination of these correlations. Here we study the joint probability distribution of an arbitrary number of tagged particles in the SEP. We determine analytically its large-time limit for an arbitrary density of particles, and its full dynamics in the high-density limit. In this limit, we obtain the time-dependent large deviation function of the problem and unveil a universal scaling form shared by the cumulants.
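
    The single-tag subdiffusion underlying these correlations is simple to reproduce numerically: a Monte Carlo sketch of symmetric hopping with exclusion on a ring shows the tagged particle's mean-squared displacement growing like t^(1/2) rather than t. Ring size, density, and times are illustrative choices; the paper's results are exact, not simulation-based.

      import numpy as np

      # SEP on a ring: each sweep attempts n random single-particle hops;
      # a hop is accepted only if the target site is empty.  The tagged
      # particle's unwrapped displacement gives the MSD.
      rng = np.random.default_rng(6)
      L, n, T, runs = 100, 50, 800, 40          # ring size, particles, sweeps
      msd = np.zeros(T)
      for _ in range(runs):
          pos = rng.choice(L, n, replace=False)
          occ = np.zeros(L, bool)
          occ[pos] = True
          tag, x_tag = 0, 0                     # follow particle index 0
          for t in range(T):
              for _ in range(n):
                  i = rng.integers(n)
                  d = 1 if rng.random() < 0.5 else -1
                  target = (pos[i] + d) % L
                  if not occ[target]:
                      occ[pos[i]] = False
                      occ[target] = True
                      pos[i] = target
                      if i == tag:
                          x_tag += d            # unwrapped displacement
              msd[t] += x_tag**2
      msd /= runs
      # For sqrt(t) growth, msd(T)/msd(T/4) ~ 2 (free diffusion would give ~4).
      print(msd[-1] / msd[T // 4 - 1])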

  4. Modeling Invasion Dynamics with Spatial Random-Fitness Due to Micro-Environment

    PubMed Central

    Manem, V. S. K.; Kaveh, K.; Kohandel, M.; Sivaloganathan, S.

    2015-01-01

    Numerous experimental studies have demonstrated that the microenvironment is a key regulator influencing the proliferative and migrative potentials of species. Spatial and temporal disturbances lead to adverse and hazardous microenvironments for cellular systems that is reflected in the phenotypic heterogeneity within the system. In this paper, we study the effect of microenvironment on the invasive capability of species, or mutants, on structured grids (in particular, square lattices) under the influence of site-dependent random proliferation in addition to a migration potential. We discuss both continuous and discrete fitness distributions. Our results suggest that the invasion probability is negatively correlated with the variance of fitness distribution of mutants (for both advantageous and neutral mutants) in the absence of migration of both types of cells. A similar behaviour is observed even in the presence of a random fitness distribution of host cells in the system with neutral fitness rate. In the case of a bimodal distribution, we observe zero invasion probability until the system reaches a (specific) proportion of advantageous phenotypes. Also, we find that the migrative potential amplifies the invasion probability as the variance of fitness of mutants increases in the system, which is the exact opposite in the absence of migration. Our computational framework captures the harsh microenvironmental conditions through quenched random fitness distributions and migration of cells, and our analysis shows that they play an important role in the invasion dynamics of several biological systems such as bacterial micro-habitats, epithelial dysplasia, and metastasis. We believe that our results may lead to more experimental studies, which can in turn provide further insights into the role and impact of heterogeneous environments on invasion dynamics. PMID:26509572

  6. Scale matters

    DOE PAGES

    Margolin, L. G.

    2018-03-19

    The applicability of Navier–Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman–Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. Finally, I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics.

  7. Discrete disorder models for many-body localization

    NASA Astrophysics Data System (ADS)

    Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub

    2018-04-01

    Using the exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain, comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long-time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Significant deviations are observed only for the binary distribution.
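
    The rescaling in question is elementary to illustrate: draw on-site disorder from a uniform, a binary, and a multi-valued discrete distribution of the same nominal strength W and compare their standard deviations, which set the effective disorder strength. The specific distributions below are illustrative stand-ins for the family studied.

      import numpy as np

      # Standard deviation of disorder models with nominal strength W:
      # uniform on [-W, W] has std W/sqrt(3); binary {-W, +W} has std W.
      rng = np.random.default_rng(7)
      W, n = 1.0, 200000
      models = {
          "uniform":  rng.uniform(-W, W, n),
          "binary":   rng.choice([-W, W], n),
          "5-valued": rng.choice(np.linspace(-W, W, 5), n),
      }
      for name, h in models.items():
          print(f"{name:9s} std = {h.std():.4f}  (rescale factor {W / h.std():.3f})")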

  8. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

    In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.

  9. Specifying the Probability Characteristics of Funnel Plot Control Limits: An Investigation of Three Approaches

    PubMed Central

    Manktelow, Bradley N.; Seaton, Sarah E.

    2012-01-01

    Background: Emphasis is increasingly being placed on the monitoring and comparison of clinical outcomes between healthcare providers. Funnel plots have become a standard graphical methodology to identify outliers and comprise plotting an outcome summary statistic from each provider against a specified ‘target’ together with upper and lower control limits. With discrete probability distributions it is not possible to specify the exact probability that an observation from an ‘in-control’ provider will fall outside the control limits. However, general probability characteristics can be set and specified using interpolation methods. Guidelines recommend that providers falling outside such control limits should be investigated, potentially with significant consequences, so it is important that the properties of the limits are understood. Methods: Control limits for funnel plots for the Standardised Mortality Ratio (SMR) based on the Poisson distribution were calculated using three proposed interpolation methods, and the probability was calculated of an ‘in-control’ provider falling outside of the limits. Examples using published data were shown to demonstrate the potential differences in the identification of outliers. Results: The first interpolation method ensured that the probability of an observation of an ‘in-control’ provider falling outside either limit was always less than a specified nominal probability (p). The second method resulted in such an observation falling outside either limit with a probability that could be either greater or less than p, depending on the expected number of events. The third method led to a probability that was always greater than, or equal to, p. Conclusion: The use of different interpolation methods can lead to differences in the identification of outliers. This is particularly important when the expected number of events is small. We recommend that users of these methods be aware of the differences, and specify which interpolation method is to be used prior to any analysis. PMID:23029202
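
    The conservative end of this spectrum (the first method's guarantee) can be sketched directly: taking exact Poisson quantiles for the control limits ensures an ‘in-control’ provider falls outside either limit with probability at most the nominal p, and interpolating between adjacent integer quantiles yields the intermediate behaviours described above. The nominal level and range of expected counts below are illustrative.

      import numpy as np
      from scipy import stats

      # Exact (conservative) Poisson control limits for an SMR funnel plot.
      # ppf(q) returns the smallest integer k with CDF(k) >= q, so
      # P(X < lower) < p/2 and P(X > upper) <= p/2 for X ~ Poisson(E).
      p = 0.002                                  # nominal two-sided level
      E = np.array([2.0, 10.0, 50.0, 100.0])     # expected events per provider

      lower = stats.poisson.ppf(p / 2, E)
      upper = stats.poisson.ppf(1 - p / 2, E)
      for e, lo, hi in zip(E, lower / E, upper / E):
          print(f"E = {e:5.1f}:  SMR limits [{lo:.2f}, {hi:.2f}]")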

  10. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  11. Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification

    DTIC Science & Technology

    2014-09-18

    Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification DISSERTATION Jeffrey D. Morris... QUANTUM KEY DISTRIBUTION SIMULATION FRAMEWORK USING THE DISCRETE EVENT SYSTEM SPECIFICATION DISSERTATION Presented to the Faculty Department of Systems

  12. The Rainbow Spectrum of RNA Secondary Structures.

    PubMed

    Li, Thomas J X; Reidys, Christian M

    2018-06-01

    In this paper, we analyze the length spectrum of rainbows in RNA secondary structures. A rainbow in a secondary structure is a maximal arc with respect to the partial order induced by nesting. We show that there is a significant gap in this length spectrum. We shall prove that there asymptotically almost surely exists a unique longest rainbow of length at least [Formula: see text] and that with high probability any other rainbow has finite length. We show that the distribution of the length of the longest rainbow converges to a discrete limit law and that, for finite k, the distribution of rainbows of length k becomes for large n a negative binomial distribution. We then put the results of this paper into context, comparing the analytical results with those observed in RNA minimum free energy structures, biological RNA structures and relate our findings to the sparsification of folding algorithms.

  13. Variable Weight Fractional Collisions for Multiple Species Mixtures

    DTIC Science & Technology

    2017-08-28

    Variable weights for dynamic range: continuum to discrete... Representation: many particles → continuous distribution; a discretized VDF yields Vlasov, but the collision integral is still a problem. Particle methods: VDF to delta... function set, collisions between discrete velocities, but a poorly resolved tail (the tail is critical to inelastic collisions). Variable weights permit extra DOF in

  14. Optimized lower leg injury probability curves from postmortem human subject tests under axial impacts.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko

    2014-01-01

    Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. The study reexamined lower leg postmortem human subject (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters of the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit among the Weibull, log-normal, and log-logistic distributions was selected based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution, and the relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function compared to the other 2 distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-olds at the 5, 25, and 50% risk levels for lower leg fracture. For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because the procedures used in the present survival analysis are accepted by international automotive communities, the current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.

  15. Dinosaurs, Dinosaur Eggs, and Probability.

    ERIC Educational Resources Information Center

    Teppo, Anne R.; Hodgson, Ted

    2001-01-01

    Outlines several recommendations for teaching probability in the secondary school. Offers an activity that employs simulation by hand and using a programmable calculator in which geometry, analytical geometry, and discrete mathematics are explored. (KHR)

  16. More than a filter: Feature-based attention regulates the distribution of visual working memory resources.

    PubMed

    Dube, Blaire; Emrich, Stephen M; Al-Aidroos, Naseem

    2017-10-01

    Across 2 experiments we revisited the filter account of how feature-based attention regulates visual working memory (VWM). Originally drawing from discrete-capacity ("slot") models, the filter account proposes that attention operates like the "bouncer in the brain," preventing distracting information from being encoded so that VWM resources are reserved for relevant information. Given recent challenges to the assumptions of discrete-capacity models, we investigated whether feature-based attention plays a broader role in regulating memory. Both experiments used partial report tasks in which participants memorized the colors of circle and square stimuli, and we provided a feature-based goal by manipulating the likelihood that 1 shape would be probed over the other across a range of probabilities. By decomposing participants' responses using mixture and variable-precision models, we estimated the contributions of guesses, nontarget responses, and imprecise memory representations to their errors. Consistent with the filter account, participants were less likely to guess when the probed memory item matched the feature-based goal. Interestingly, this effect varied with goal strength, even across high probabilities where goal-matching information should always be prioritized, demonstrating strategic control over filter strength. Beyond this effect of attention on which stimuli were encoded, we also observed effects on how they were encoded: Estimates of both memory precision and nontarget errors varied continuously with feature-based attention. The results offer support for an extension to the filter account, where feature-based attention dynamically regulates the distribution of resources within working memory so that the most relevant items are encoded with the greatest precision. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. A priori discretization quality metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns firstly depends on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units (HRUs), etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justification and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition, even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics that consider the spatial pattern changes of model input data; and (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold modification. For the first time, the metrics quantify the routing-relevant information loss due to discretization, according to the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes upon input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scale is greater than the input data resolution. The discretization metrics and decision-making approach are applied to the Grand River watershed located in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss increases monotonically as the discretization becomes coarser. With regard to routing information loss in subbasin discretization, multiple points of interest, rather than just the watershed outlet, should be considered. Moreover, subbasin and HRU discretization decisions should not be made independently, since the subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).
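
    A minimal sketch of the area-weighted accumulation idea: compare fine-grid input values with the value each candidate unit would carry, weight by cell area, and watch the loss grow as units coarsen. The function name and the synthetic field are illustrative assumptions, not the study's metrics.

      import numpy as np

      def discretization_loss(values, labels, cell_area=1.0):
          """Area-weighted mean absolute change from lumping cells into units."""
          values = values.ravel()
          labels = labels.ravel()
          loss = 0.0
          for u in np.unique(labels):
              mask = labels == u
              unit_mean = values[mask].mean()      # value the unit will carry
              loss += np.sum(np.abs(values[mask] - unit_mean)) * cell_area
          return loss / (values.size * cell_area)

      # coarser discretizations (fewer units) should show monotonically larger loss
      rng = np.random.default_rng(1)
      field = rng.gamma(2.0, 1.0, size=(100, 100))  # e.g. slope or a soil property
      fine = np.arange(10_000) // 100               # 100 candidate units
      coarse = np.arange(10_000) // 2500            # 4 candidate units
      print(discretization_loss(field, fine), discretization_loss(field, coarse))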

  18. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  19. Reconcile Planck-scale discreteness and the Lorentz-Fitzgerald contraction

    NASA Astrophysics Data System (ADS)

    Rovelli, Carlo; Speziale, Simone

    2003-03-01

    A Planck-scale minimal observable length appears in many approaches to quantum gravity. It is sometimes argued that this minimal length might conflict with Lorentz invariance, because a boosted observer can see the minimal length further Lorentz contracted. We show that this is not the case within loop quantum gravity. In loop quantum gravity the minimal length (more precisely, minimal area) does not appear as a fixed property of geometry, but rather as the minimal (nonzero) eigenvalue of a quantum observable. The boosted observer can see the same observable spectrum, with the same minimal area. What changes continuously in the boost transformation is not the value of the minimal length: it is the probability distribution of seeing one or the other of the discrete eigenvalues of the area. We discuss several difficulties associated with boosts and area measurement in quantum gravity. We compute the transformation of the area operator under a local boost, propose an explicit expression for the generator of local boosts, and give the conditions under which its action is unitary.

  20. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  1. Stochastic analysis of a pulse-type prey-predator model

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Zhu, W. Q.

    2008-04-01

    A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.

  2. Stochastic analysis of a pulse-type prey-predator model.

    PubMed

    Wu, Y; Zhu, W Q

    2008-04-01

    A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.

  3. Discrete Latent Markov Models for Normally Distributed Response Data

    ERIC Educational Resources Information Center

    Schmittmann, Verena D.; Dolan, Conor V.; van der Maas, Han L. J.; Neale, Michael C.

    2005-01-01

    Van de Pol and Langeheine (1990) presented a general framework for Markov modeling of repeatedly measured discrete data. We discuss analogous single indicator models for normally distributed responses. In contrast to discrete models, which have been studied extensively, analogous continuous response models have hardly been considered. These…

  4. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    PubMed Central

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220
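
    A minimal sketch of the NEH heuristic named in the abstract, assuming the standard permutation-flow-shop makespan recursion; the processing-time matrix is illustrative.

      import numpy as np

      def makespan(seq, proc):
          # standard PFSP recursion: c[k] = max(c[k], c[k-1]) + p[job, k]
          c = np.zeros(proc.shape[1])
          for j in seq:
              for k in range(proc.shape[1]):
                  c[k] = max(c[k], c[k - 1] if k else 0.0) + proc[j, k]
          return c[-1]

      def neh(proc):
          order = list(np.argsort(-proc.sum(axis=1)))  # jobs by decreasing total time
          seq = [order[0]]
          for j in order[1:]:
              # insert each job at the position minimizing the partial makespan
              cost, pos = min((makespan(seq[:i] + [j] + seq[i:], proc), i)
                              for i in range(len(seq) + 1))
              seq.insert(pos, j)
          return seq, makespan(seq, proc)

      proc = np.array([[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]])  # jobs x machines
      print(neh(proc))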

  5. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    PubMed

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem.

  6. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
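
    The linearity result admits a one-line derivation if one assumes, as a sketch, that each discrete time step is a Bernoulli trial with jump probability x: the stated density is then a conjugate Beta prior, and the posterior mean (the MMSE estimate) is linear in the observed jump count n. The density cx^k(1-x)^m is from the abstract; the Bernoulli observation model is this editor's illustrative assumption.

      % prior: p(x) = c x^k (1-x)^m on (0,1), i.e. Beta(k+1, m+1)
      % likelihood of n jumps in N steps: x^n (1-x)^(N-n)
      p(x \mid n, N) \propto x^{k+n}(1-x)^{m+N-n} = \mathrm{Beta}(k+n+1,\; m+N-n+1)
      \hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid n, N] = \frac{k+n+1}{k+m+N+2}
      % linear in the count n, hence realizable by a linear estimator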

  7. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image database layers.
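
    A minimal sketch of point (1), assuming Gaussian class-conditional densities: collateral data enter maximum likelihood classification as class priors, so the discriminant becomes log p(x|class) + log prior(class). The class names, means, and priors below are invented for illustration.

      import numpy as np
      from scipy.stats import multivariate_normal

      classes = ["conifer", "hardwood", "brush"]
      means = [np.array([80.0, 60.0]), np.array([95.0, 70.0]), np.array([110.0, 85.0])]
      cov = np.eye(2) * 25.0
      # a collateral layer (e.g. terrain stratum) supplies site-specific priors
      priors = np.array([0.6, 0.3, 0.1])

      def classify(x, priors):
          log_post = [multivariate_normal.logpdf(x, mean=mu, cov=cov) + np.log(p)
                      for mu, p in zip(means, priors)]
          return classes[int(np.argmax(log_post))]

      x = np.array([92.0, 68.0])
      print(classify(x, priors))                      # informative priors can flip
      print(classify(x, np.array([1/3, 1/3, 1/3])))   # the label vs. flat priors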

  8. Scale matters

    NASA Astrophysics Data System (ADS)

    Margolin, L. G.

    2018-04-01

    The applicability of Navier-Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman-Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics. This article is part of the theme issue "Hilbert's sixth problem".

  9. Evaluating sample allocation and effort in detecting population differentiation for discrete and continuously distributed individuals

    Treesearch

    Erin L. Landguth; Michael K. Schwartz

    2014-01-01

    One of the most pressing issues in spatial genetics concerns sampling. Traditionally, substructure and gene flow are estimated for individuals sampled within discrete populations. Because many species may be continuously distributed across a landscape without discrete boundaries, understanding sampling issues becomes paramount. Given large-scale, geographically broad...

  10. Distributed Relaxation for Conservative Discretizations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2001-01-01

    A multigrid method is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work that is a small (less than 10) multiple of the operation count in one target-grid residual evaluation. The way to achieve this efficiency is the distributed relaxation approach. TME solvers employing distributed relaxation have already been demonstrated for nonconservative formulations of high-Reynolds-number viscous incompressible and subsonic compressible flow regimes. The purpose of this paper is to provide foundations for applications of distributed relaxation to conservative discretizations. A direct correspondence between the primitive variable interpolations for calculating fluxes in conservative finite-volume discretizations and stencils of the discretized derivatives in the nonconservative formulation has been established. Based on this correspondence, one can arrive at a conservative discretization which is very efficiently solved with a nonconservative relaxation scheme and this is demonstrated for conservative discretization of the quasi one-dimensional Euler equations. Formulations for both staggered and collocated grid arrangements are considered and extensions of the general procedure to multiple dimensions are discussed.

  11. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
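
    A minimal sketch of the Wang-Landau-style mechanics the abstract describes: weights over a partition are learnt on the fly by penalizing visited sets, and only a fraction gamma of the learnt weights biases the target (the "partial biasing" of point (ii)). The double-well potential, update schedule, and all parameters are illustrative assumptions, not the authors' algorithm.

      import numpy as np

      rng = np.random.default_rng(2)
      x_grid = np.linspace(-2.0, 2.0, 81)
      energy = (x_grid**2 - 1.0) ** 2            # toy double-well potential
      beta, gamma = 5.0, 0.8                     # inverse temperature, biasing fraction
      theta = np.zeros(x_grid.size)              # running free-energy-like weights
      visits = np.zeros(x_grid.size)
      step = 0.5                                 # weight update strength
      i = 20                                     # start in the left well (x = -1)

      for t in range(200_000):
          j = int(np.clip(i + rng.integers(-3, 4), 0, x_grid.size - 1))
          # biased target pi(i) ~ exp(-beta*E_i - gamma*theta_i); Metropolis step
          # (boundary clipping makes the proposal only approximately symmetric)
          log_a = -beta * (energy[j] - energy[i]) - gamma * (theta[j] - theta[i])
          if rng.random() < np.exp(min(0.0, log_a)):
              i = j
          theta[i] += step                       # penalize the visited set
          visits[i] += 1
          if (t + 1) % 20_000 == 0:
              step *= 0.5                        # anneal updates, Wang-Landau style

      # without the bias, beta = 5 traps the walker in one well for long runs
      print("fraction of visits right of the barrier:",
            visits[x_grid > 0].sum() / visits.sum())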

  12. Simulation of Ectopic Pacemakers in the Heart: Multiple Ectopic Beats Generated by Reentry inside Fibrotic Regions

    PubMed Central

    Gouvêa de Barros, Bruno; Weber dos Santos, Rodrigo; Alonso, Sergio

    2015-01-01

    The inclusion of nonconducting media, mimicking cardiac fibrosis, in two models of cardiac tissue produces the formation of ectopic beats. The fraction of nonconducting media in comparison with the fraction of healthy myocytes and the topological distribution of cells determines the probability of ectopic beat generation. First, a detailed subcellular microscopic model that accounts for the microstructure of the cardiac tissue is constructed and employed for the numerical simulation of action potential propagation. Next, an equivalent discrete model is implemented, which permits a faster integration of the equations. This discrete model is a simplified version of the microscopic model that maintains the distribution of connections between cells. Both models produce similar results when describing action potential propagation in homogeneous tissue; however, they slightly differ in the generation of ectopic beats in heterogeneous tissue. Nevertheless, both models present the generation of reentry inside fibrotic tissues. This kind of reentry restricted to microfibrosis regions can result in the formation of ectopic pacemakers, that is, regions that will generate a series of ectopic stimulus at a fast pacing rate. In turn, such activity has been related to trigger fibrillation in the atria and in the ventricles in clinical and animal studies. PMID:26583127

  13. Generating Discrete Power-Law Distributions from a Death- Multiple Immigration Population Process

    NASA Astrophysics Data System (ADS)

    Matthews, J. O.; Jakeman, E.; Hopcraft, K. I.

    2003-04-01

    We consider the evolution of a simple population process governed by deaths and multiple immigrations that arrive with rates particular to their order. For a particular choice of rates, the equilibrium solution has a discrete power-law form. The model is a generalization of a process investigated previously where immigrants arrived in pairs [1]. The general properties of this model are discussed in a companion paper. The population is initiated with precisely M individuals present and evolves to an equilibrium distribution with a power-law tail. However the power-law tails of the equilibrium distribution are established immediately, so that moments and correlation properties of the population are undefined for any non-zero time. The technique we develop to characterize this process utilizes external monitoring that counts the emigrants leaving the population in specified time intervals. This counting distribution also possesses a power-law tail for all sampling times and the resulting time series exhibits two features worthy of note, a large variation in the strength of the signal, reflecting the power-law PDF; and secondly, intermittency of the emissions. We show that counting with a detector of finite dynamic range regularizes naturally the fluctuations, in effect `clipping' the events. All previously undefined characteristics such as the mean, autocorrelation and probabilities to the first event and time between events are well defined and derived. These properties, although obtained by discarding much data, nevertheless possess embedded power-law regimes that characterize the population in a way that is analogous to box averaging determination of fractal-dimension.

  14. Root location in random trees: a polarity property of all sampling consistent phylogenetic models except one.

    PubMed

    Steel, Mike

    2012-10-01

    Neutral macroevolutionary models, such as the Yule model, give rise to a probability distribution on the set of discrete rooted binary trees over a given leaf set. Such models can provide a signal as to the approximate location of the root when only the unrooted phylogenetic tree is known, and this signal becomes relatively more significant as the number of leaves grows. In this short note, we show that among models that treat all taxa equally, and are sampling consistent (i.e. the distribution on trees is not affected by taxa yet to be included), all such models, except one (the so-called PDA model), convey some information as to the location of the ancestral root in an unrooted tree. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Simulated maximum likelihood method for estimating kinetic rates in gene expression.

    PubMed

    Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin

    2007-01-01

    Kinetic rate in gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed aimed at evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcript, fluctuations in the activity of transcriptional factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density based on the discrete nature of stochastic simulations. The genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.

  16. Method of conditional moments (MCM) for the Chemical Master Equation: a unified framework for the method of moments and hybrid stochastic-deterministic models.

    PubMed

    Hasenauer, J; Wolf, V; Kazeroonian, A; Theis, F J

    2014-09-01

    The time-evolution of continuous-time discrete-state biochemical processes is governed by the Chemical Master Equation (CME), which describes the probability of the molecular counts of each chemical species. As the corresponding number of discrete states is, for most processes, large, a direct numerical simulation of the CME is in general infeasible. In this paper we introduce the method of conditional moments (MCM), a novel approximation method for the solution of the CME. The MCM employs a discrete stochastic description for low-copy number species and a moment-based description for medium/high-copy number species. The moments of the medium/high-copy number species are conditioned on the state of the low abundance species, which allows us to capture complex correlation structures arising, e.g., for multi-attractor and oscillatory systems. We prove that the MCM provides a generalization of previous approximations of the CME based on hybrid modeling and moment-based methods. Furthermore, it improves upon these existing methods, as we illustrate using a model for the dynamics of stochastic single-gene expression. This application example shows that due to the more general structure, the MCM allows for the approximation of multi-modal distributions.

  17. A new approach to geographic partitioning of probabilistic seismic hazard using seismic source distance with earthquake extreme and perceptibility statistics: an application to the southern Balkan region

    NASA Astrophysics Data System (ADS)

    Bayliss, T. J.

    2016-02-01

    The southeastern European cities of Sofia and Thessaloniki are explored as example site-specific scenarios by geographically zoning their individual localized seismic sources based on the highest probabilities of magnitude exceedance. This is with the aim of determining the major components contributing to each city's seismic hazard. Discrete contributions from the selected input earthquake catalogue are investigated to determine those areas that dominate each city's prevailing seismic hazard with respect to magnitude and source-to-site distance. This work is based on an earthquake catalogue developed and described in a previously published paper by the author, and on components of a magnitude probability density function. Binned magnitude and distance classes are defined using a joint magnitude-distance distribution. The prevailing seismicity of each city, as defined by a child data set extracted from the parent earthquake catalogue for each city considered, is divided into distinct constrained data bins of small discrete magnitude and source-to-site distance intervals. These are then used to describe seismic hazard in terms of uni-variate modal values, that is, M* and D*, the modal magnitude and modal source-to-site distance in each city's local historical seismicity. This work highlights that Sofia's dominating seismic hazard, that is, the modal magnitudes possessing the highest probabilities of occurrence, is located in zones confined to two regions at 60-80 km and 170-180 km from this city, for magnitude intervals of 5.75-6.00 Mw and 6.00-6.25 Mw respectively. Similarly, Thessaloniki appears prone to the highest levels of hazard over a wider epicentral distance interval, from 80 to 200 km, in the moment magnitude range 6.00-6.25 Mw.
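
    A minimal sketch of the binning step, assuming a catalogue reduced to magnitude and source-to-site distance pairs: a joint histogram over small discrete classes, with M* and D* read off the modal bin. Bin widths and the synthetic catalogue are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      mags = rng.uniform(4.5, 7.0, 500)            # Mw of events affecting a city
      dists = rng.gamma(4.0, 30.0, 500)            # source-to-site distance, km

      mag_bins = np.arange(4.5, 7.25, 0.25)        # 0.25 Mw classes
      dist_bins = np.arange(0, 310, 10)            # 10 km classes
      hist, _, _ = np.histogram2d(mags, dists, bins=[mag_bins, dist_bins])

      im, idist = np.unravel_index(np.argmax(hist), hist.shape)
      print(f"M* in [{mag_bins[im]:.2f}, {mag_bins[im+1]:.2f}) Mw, "
            f"D* in [{dist_bins[idist]:.0f}, {dist_bins[idist+1]:.0f}) km")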

  18. Comparing neuronal spike trains with inhomogeneous Poisson distribution: evaluation procedure and experimental application in cases of cyclic activity.

    PubMed

    Fiore, Lorenzo; Lorenzetti, Walter; Ratti, Giovannino

    2005-11-30

    A procedure is proposed to compare single-unit spiking activity elicited in repetitive cycles with an inhomogeneous Poisson process (IPP). Each spike sequence in a cycle is discretized and represented as a point process on a circle. The interspike interval probability density predicted for an IPP is computed on the basis of the experimental firing probability density; differences from the experimental interval distribution are assessed. This procedure was applied to spike trains which were repetitively induced by opening-closing movements of the distal article of a lobster leg. As expected, the density of short interspike intervals, less than 20-40 ms in length, was found to lie greatly below the level predicted for an IPP, reflecting the occurrence of the refractory period. Conversely, longer intervals, ranging from 20-40 to 100-120 ms, were markedly more abundant than expected; this provided evidence for a time window of increased tendency to fire again after a spike. Less consistently, a weak depression of spike generation was observed for longer intervals. A Monte Carlo procedure, implemented for comparison, produced quite similar results, but was slightly less precise and more demanding as concerns computation time.

  19. The PIT-trap-A "model-free" bootstrap procedure for inference about regression models with discrete, multivariate responses.

    PubMed

    Warton, David I; Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)-common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods.
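
    A minimal sketch of randomized PIT residuals for a Poisson regression and one bootstrap draw, in the spirit of the PIT-trap; this is not the authors' implementation, and the fitted means are assumed given.

      import numpy as np
      from scipy.stats import poisson

      rng = np.random.default_rng(4)

      def pit_residuals(y, mu):
          # u ~ Uniform(F(y-1), F(y)): asymptotically pivotal U(0,1) residuals
          lo = poisson.cdf(y - 1, mu)              # equals 0 when y == 0
          hi = poisson.cdf(y, mu)
          u = rng.uniform(lo, hi)
          return np.clip(u, 1e-12, 1 - 1e-12)      # guard the support edges

      def pit_trap_sample(y, mu):
          # one bootstrap data set: resample residual rows, invert the model CDF
          u = pit_residuals(y, mu)
          u_star = u[rng.integers(0, len(u), len(u))]
          return poisson.ppf(u_star, mu).astype(int)

      mu = np.exp(0.5 + 0.1 * np.arange(30))       # fitted means, assumed given
      y = rng.poisson(mu)
      print(pit_trap_sample(y, mu)[:10])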

  20. The PIT-trap—A “model-free” bootstrap procedure for inference about regression models with discrete, multivariate responses

    PubMed Central

    Warton, David I; Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)—common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of “model-free bootstrap”, adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods. PMID:28738071

  1. Novel branching particle method for tracking

    NASA Astrophysics Data System (ADS)

    Ballantyne, David J.; Chan, Hubert Y.; Kouritzin, Michael A.

    2000-07-01

    Particle approximations are used to track a maneuvering signal given only a noisy, corrupted sequence of observations, as are encountered in target tracking and surveillance. The signal exhibits nonlinearities that preclude the optimal use of a Kalman filter. It obeys a stochastic differential equation (SDE) in a seven-dimensional state space, one dimension of which is a discrete maneuver type. The maneuver type switches as a Markov chain and each maneuver identifies a unique SDE for the propagation of the remaining six state parameters. Observations are constructed at discrete time intervals by projecting a polygon corresponding to the target state onto two dimensions and incorporating the noise. A new branching particle filter is introduced and compared with two existing particle filters. The filters simulate a large number of independent particles, each of which moves with the stochastic law of the target. Particles are weighted, redistributed, or branched, depending on the method of filtering, based on their accordance with the current observation from the sequence. Each filter provides an approximated probability distribution of the target state given all back observations. All three particle filters converge to the exact conditional distribution as the number of particles goes to infinity, but differ in how well they perform with a finite number of particles. Using the exactly known ground truth, the root-mean-squared (RMS) errors in target position of the estimated distributions from the three filters are compared. The relative tracking power of the filters is quantified for this target at varying sizes, particle counts, and levels of observation noise.
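
    A minimal sketch of the weight-and-resample step that all three filters share, on a 1-D toy signal standing in for the 7-D maneuvering-target state; dynamics, noise levels, and multinomial resampling are illustrative assumptions (the paper's branching filter differs in how particles are redistributed).

      import numpy as np

      rng = np.random.default_rng(5)
      n_particles, n_steps = 1000, 50
      truth = 0.0
      particles = rng.normal(0.0, 1.0, n_particles)

      for _ in range(n_steps):
          truth += 0.1 + rng.normal(0.0, 0.3)                   # signal SDE, discretized
          particles += 0.1 + rng.normal(0.0, 0.3, n_particles)  # propagate same law
          obs = truth + rng.normal(0.0, 0.5)                    # noisy observation
          w = np.exp(-0.5 * ((obs - particles) / 0.5) ** 2)     # likelihood weights
          w /= w.sum()
          idx = rng.choice(n_particles, n_particles, p=w)       # multinomial resampling
          particles = particles[idx]

      print("truth", round(truth, 2), "estimate", round(particles.mean(), 2))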

  2. Optimized lower leg injury probability curves from post-mortem human subject tests under axial impacts

    PubMed Central

    Yoganandan, Narayan; Arun, Mike W.J.; Pintar, Frank A.; Szabo, Aniko

    2015-01-01

    Objective Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. Methods The study re-examined lower leg PMHS data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and non-injury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the co-variable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit among the Weibull, log-normal and log-logistic distributions was selected using the Akaike Information Criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. Results The mean age, stature and weight were 58.2 ± 15.1 years, 1.74 ± 0.08 m and 74.9 ± 13.8 kg. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function compared with the other two distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45- and 65-year-old age groups at 5, 25 and 50% risk levels for lower leg fracture. For 25, 45 and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. Conclusions This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, the current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines. PMID:25307381

  3. Stochastic Dual Algorithm for Voltage Regulation in Distribution Networks with Discrete Loads: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan

    This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges are: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies of the two types of devices. In this paper, we first make a convex relaxation for the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.

  4. Discrete breathers in graphane: Effect of temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baimova, J. A., E-mail: julia.a.baimova@gmail.com; Murzaev, R. T.; Lobzenko, I. P.

    The discrete breathers in graphane in thermodynamic equilibrium in the temperature range 50–600 K are studied by molecular dynamics simulation. A discrete breather is a hydrogen atom vibrating along the normal to a sheet of graphane at a high amplitude. As was found earlier, the lifetime of a discrete breather at zero temperature corresponds to several tens of thousands of vibrations. The effect of temperature on the decay time of discrete breathers and the probability of their detachment from a sheet of graphane are studied in this work. It is shown that closely spaced breathers can exchange energy with each other at zero temperature. The data obtained suggest that thermally activated discrete breathers can be involved in the dehydrogenation of graphane, which is important for hydrogen energetics.

  5. Estimating the proportion of true null hypotheses when the statistics are discrete.

    PubMed

    Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S

    2015-07-15

    In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support, and the null distribution may depend on an ancillary statistic, such as a table margin, that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics, and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
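
    For context, a minimal sketch of a lambda-threshold (Storey-type) π0 estimator of the kind developed for continuous tests: under a uniform null, the count of P values above λ estimates π0·m·(1-λ). With discrete P values the uniformity assumption fails, which is the article's point; the data below are synthetic.

      import numpy as np

      def pi0_storey(pvals, lam=0.5):
          # under a uniform null, #{p > lam} is about pi0 * m * (1 - lam)
          m = len(pvals)
          return min(1.0, np.sum(pvals > lam) / ((1.0 - lam) * m))

      rng = np.random.default_rng(6)
      p_null = rng.uniform(0, 1, 800)              # true nulls, continuous
      p_alt = rng.beta(0.2, 3.0, 200)              # non-nulls, skewed toward 0
      print(pi0_storey(np.concatenate([p_null, p_alt])))  # near the true 0.8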

  6. Visualizing histopathologic deep learning classification and anomaly detection using nonlinear feature space dimensionality reduction.

    PubMed

    Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias

    2018-05-16

    There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.

  7. Stable laws and cosmic ray physics

    NASA Astrophysics Data System (ADS)

    Genolini, Y.; Salati, P.; Serpico, P. D.; Taillet, R.

    2017-04-01

    Context. In the new "precision era" for cosmic ray astrophysics, scientists making theoretical predictions cannot content themselves with average trends, but need to correctly take into account intrinsic uncertainties. The space-time discreteness of the cosmic ray sources, together with a substantial ignorance of their precise epochs and locations (with the possible exception of the most recent and close ones) play an important role in this sense. Aims: We elaborate a statistical theory to deal with this problem, relating the composite probability P(Ψ) to obtain a flux Ψ at the Earth and the single-source probability p(ψ) to contribute with a flux ψ. The main difficulty arises from the fact that p(ψ) is a "heavy tail" distribution, characterized by power-law or broken power-law behavior up to very large fluxes, for which the central limit theorem does not hold, and leading to distributions different from Gaussian. The functional form of the distribution for the aggregated flux is nonetheless unchanged by its own convolution, that is, it belongs to the so-called stable laws class. Methods: We analytically discuss the regime of validity of the stable laws associated with the distributions arising in cosmic ray astrophysics, as well as the limitations to the treatment imposed by causal considerations and partial source catalog knowledge. We validate our results with extensive Monte Carlo simulations, for different regimes of propagation parameters and energies. Results: We find that relatively simple recipes provide a satisfactory description of the probability P(Ψ). We also find that a naive Gaussian fit to simulation results would underestimate the probability of very large fluxes, that is, several times above the average, while overestimating the probability of relatively milder excursions. At large energies, large flux fluctuations are prevented by causal considerations, while at low energies, a partial knowledge of the recent and nearby population of sources plays an important role. A few proposals have been recently discussed in the literature to account for spectral breaks reported in cosmic ray data in terms of local contributions. We apply our newly developed theory to assess their probabilities, finding that they are relatively small, typically at the 0.1% level or smaller, never exceeding 1%. Conclusions: The use of heavy tail distributions is relevant in assessing how likely a measured cosmic ray flux is to depart from the average expectation in a given model. The existing mathematical theory leading to stable laws can be adapted to the case of interest via some recipes that closely reproduce numerical simulations and are relatively easy to implement.

  8. Delay-distribution-dependent H∞ state estimation for delayed neural networks with (x,v)-dependent noises and fading channels.

    PubMed

    Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E

    2016-12-01

    This paper deals with the H∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  10. Exact calculation of distributions on integers, with application to sequence alignment.

    PubMed

    Newberg, Lee A; Lawrence, Charles E

    2009-01-01

    Computational biology is replete with high-dimensional discrete prediction and inference problems. Dynamic programming recursions can be applied to several of the most important of these, including sequence alignment, RNA secondary-structure prediction, phylogenetic inference, and motif finding. In these problems, attention is frequently focused on some scalar quantity of interest, a score, such as an alignment score or the free energy of an RNA secondary structure. In many cases, score is naturally defined on integers, such as a count of the number of pairing differences between two sequence alignments, or else an integer score has been adopted for computational reasons, such as in the test of significance of motif scores. The probability distribution of the score under an appropriate probabilistic model is of interest, such as in tests of significance of motif scores, or in calculation of Bayesian confidence limits around an alignment. Here we present three algorithms for calculating the exact distribution of a score of this type; then, in the context of pairwise local sequence alignments, we apply the approach so as to find the alignment score distribution and Bayesian confidence limits.
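
    A minimal sketch of one standard route to exact distributions on integers, assuming the score decomposes into independent integer-valued parts: the exact PMF is then the repeated convolution of the parts' PMFs, which is the core step of the dynamic-programming recursions mentioned. The per-position score PMFs are illustrative.

      import numpy as np

      def convolve_pmfs(pmfs):
          """Exact PMF of the summed score; index i holds P(score = i)."""
          dist = np.array([1.0])                   # point mass at score 0
          for p in pmfs:
              dist = np.convolve(dist, p)          # one DP/convolution step
          return dist

      # e.g. per-position match scores 0/1/2 with fixed probabilities
      pmfs = [np.array([0.5, 0.3, 0.2])] * 10      # 10 independent positions
      dist = convolve_pmfs(pmfs)
      print(dist.sum(), dist.argmax())             # sums to 1; modal total score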

  11. ENSO Dynamics and Trends, AN Alternate View

    NASA Astrophysics Data System (ADS)

    Rojo Hernandez, J. D.; Lall, U.; Mesa, O. J.

    2017-12-01

    El Niño - Southern Oscillation (ENSO) is the most important inter-annual climate fluctuation on a planetary level, with great effects on the hydrological cycle, agriculture, ecosystems, health and society. This work demonstrates the use of Non-Homogeneous Hidden Markov Models (NHMM) to characterize ENSO using a set of discrete states with a variable transition probability matrix, based on the sea surface temperature anomalies (SSTA) of the Kaplan Extended SST v2 between 120E-90W, 15N-15S from Jan-1856 to Dec-2016. ENSO spatial patterns, their temporal distribution, the transition probabilities between patterns and their temporal evolution are the main results of the NHMM applied to ENSO. The five "hidden" states found appear to represent the different "flavors" described in the literature: the Canonical El Niño, Central El Niño, a Neutral state, Central La Niña and the Canonical La Niña. Using the whole record length of the SSTA, it was possible to identify trends in the dynamic system, with a decrease in the probability of occurrence of the cold events and a significant increase of the warm events, in particular of Central El Niño events, whose probability of occurrence has increased dramatically since 1960, coupled with increases in global temperature.

  12. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each said process; and programming each said agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
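
    A minimal sketch of the claimed scheme: one agent per process, each programmed to respond to the three discrete event types delivered in a message loop. Event names follow the patent text; the classes and responses are illustrative.

      class ProcessAgent:
          def __init__(self, name):
              self.name = name
              self.resources = 0

          def handle(self, event):
              # each discrete event triggers a programmed response
              if event == "clock_tick":
                  return f"{self.name}: advance internal state one step"
              if event == "resources_received":
                  self.resources += 1
                  return f"{self.name}: stock now {self.resources}"
              if event == "request_for_output":
                  return f"{self.name}: ship output downstream"

      agents = [ProcessAgent("casting"), ProcessAgent("machining")]
      for event in ["clock_tick", "resources_received", "request_for_output"]:
          for agent in agents:                     # single-processor message loop
              print(agent.handle(event))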

  13. Rheology of U-Shaped Granular Particles

    NASA Astrophysics Data System (ADS)

    Hill, Matthew; Franklin, Scott

    We study the response of cylindrical samples of U-shaped granular particles (staples) to extensional loads. Samples elongate in discrete bursts (events) corresponding to particles rearranging and re-entangling. Previous research on samples of constant cross-sectional area found that a Weibullian weakest-link theory could explain the distribution of yield points. We now vary the cross-sectional area, and find that the maximum yield pressure (force/area) is a function of particle number density and independent of area. The probability distribution functions of important event characteristics (the stress increase before an event and the stress released during an event) both fall off inversely with magnitude, reminiscent of avalanche dynamics. Fourier transforms of the fluctuating force (or stress) scale inversely with frequency, suggesting dry friction plays a role in the rearrangements. Finally, there is some evidence that dynamics are sensitive to the stiffness of the tensile testing machine, although an explanation for this behavior is unknown.

  14. General simulation algorithm for autocorrelated binary processes.

    PubMed

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
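
    A minimal sketch of the Markov end point mentioned in the abstract: a two-state (0/1) Markov chain is a binary signal whose autocorrelation decays exponentially as (1-p01-p10)^k. The paper's algorithm goes further (beta-distributed transition probabilities plus an iterative spectral method) and is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(7)
      p01, p10 = 0.1, 0.2                          # 0->1 and 1->0 transition probs
      n = 100_000
      x = np.empty(n, dtype=int)
      x[0] = 0
      for t in range(1, n):
          u = rng.random()
          x[t] = (1 if u < p01 else 0) if x[t - 1] == 0 else (0 if u < p10 else 1)

      def acf(x, lag):
          x = x - x.mean()
          return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

      rho = 1 - p01 - p10                          # theoretical lag-1 correlation
      print(acf(x, 1), rho, acf(x, 5), rho**5)     # exponential decay rho**k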

  15. Directional Migration of Recirculating Lymphocytes through Lymph Nodes via Random Walks

    PubMed Central

    Thomas, Niclas; Matejovicova, Lenka; Srikusalanukul, Wichat; Shawe-Taylor, John; Chain, Benny

    2012-01-01

    Naive T lymphocytes exhibit extensive antigen-independent recirculation between blood and lymph nodes, where they may encounter dendritic cells carrying cognate antigen. We examine how long different T cells may spend in an individual lymph node by examining data from long-term cannulation of blood and efferent lymphatics of a single lymph node in the sheep. We determine empirically the distribution of transit times of migrating T cells by applying the Least Absolute Shrinkage & Selection Operator (LASSO), or L1-regularised regression, to fit experimental data describing the proportion of labelled infused cells in blood and efferent lymphatics over time. The optimal inferred solution reveals a distribution with high variance and strong skew. The mode transit time is typically between 10 and 20 hours, but a significant number of cells spend more than 70 hours before exiting. We complement the empirical machine-learning-based approach by modelling lymphocyte passage through the lymph node in silico. On the basis of previous two-photon analysis of lymphocyte movement, we optimised distributions which describe the transit times (first passage times) of discrete one-dimensional and continuous (Brownian) three-dimensional random walks with drift. The optimal fit is obtained when drift is small, i.e. the ratio of probabilities of migrating forward and backward within the node is close to one. These distributions are qualitatively similar to the inferred empirical distribution, with high variance and strong skew. In contrast, an optimised normal distribution of transit times (symmetrical around the mean) fitted the data poorly. The results demonstrate that the rapid recirculation of lymphocytes observed at a macro level is compatible with predominantly randomised movement within lymph nodes, and significant probabilities of long transit times. We discuss how this pattern of migration may contribute to facilitating interactions between low-frequency T cells and antigen-presenting cells carrying cognate antigen. PMID:23028891
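
    A minimal sketch of the discrete one-dimensional random walk with weak drift used in the modelling part: simulated first-passage (transit) times through a short chain of sites show the high variance and strong right skew described. Site count, drift, and boundary handling are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      n_sites, p_fwd = 20, 0.52                    # weak drift: p_fwd close to 0.5

      def transit_time():
          pos, t = 0, 0
          while pos < n_sites:
              pos += 1 if rng.random() < p_fwd else -1
              pos = max(pos, 0)                    # reflecting entry boundary
              t += 1
          return t

      times = np.array([transit_time() for _ in range(2000)])
      print(np.median(times), times.mean(), times.max())  # strong right skew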

  16. Human dynamics scaling characteristics for aerial inbound logistics operation

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Guo, Jin-Li

    2010-05-01

    In recent years, the study of power-law scaling characteristics of real-life networks, which deviate from the Poisson process, has attracted much interest from scholars. In this paper, we take the whole process of aerial inbound operation in a logistics company as the empirical object. The main aim of this work is to study the statistical scaling characteristics of the task-restricted work patterns. We found that the statistical variables have the scaling characteristics of unimodal distribution with a power-law tail in five statistical distributions - that is to say, there obviously exists a peak in each distribution, the shape of the left part is close to a Poisson distribution, and the right part has heavy-tailed scaling statistics. Furthermore, to our surprise, there is only one distribution whose right part can be approximated by the power-law form with exponent α=1.50. The others are bigger than 1.50 (three of four are about 2.50, one of four is about 3.00). We then draw two inferences from these empirical results: first, human behaviors are probably close to both Poisson statistics and power-law distributions on certain levels, and human-computer interaction behaviors may be the most common in logistics operational areas, even in the whole area of task-restricted work patterns. Second, the hypothesis in Vázquez et al. (2006) [A. Vázquez, J. G. Oliveira, Z. Dezsö, K.-I. Goh, I. Kondor, A.-L. Barabási. Modeling burst and heavy tails in human dynamics, Phys. Rev. E 73 (2006) 036127] is probably not sufficient; it claimed that human dynamics can be classified into two discrete universality classes. There may be a new human dynamics mechanism that is different from the classical Barabási models.

  17. Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.

    PubMed

    Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor

    2015-11-01

    Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained with only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables and do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain, and the advantages of the compositional cokriging approach were demonstrated.

  18. Unconditional security proof of long-distance continuous-variable quantum key distribution with discrete modulation.

    PubMed

    Leverrier, Anthony; Grangier, Philippe

    2009-05-08

    We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.

  19. Discrete geometric analysis of message passing algorithm on graphs

    NASA Astrophysics Data System (ADS)

    Watanabe, Yusuke

    2010-04-01

    We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required. However, exact computation is generally intractable because its cost grows exponentially with the problem size. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e. has no cycle, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and possibly exhibits oscillatory and non-convergent behaviors. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for analysis of the LBP algorithm, connecting properties of the graph with those of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems, including the (non)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.

  20. SIMULATED HUMAN ERROR PROBABILITY AND ITS APPLICATION TO DYNAMIC HUMAN FAILURE EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

    Abstract Objectives: Human reliability analysis (HRA) methods typically analyze human failure events (HFEs) at the overall task level. For dynamic HRA, it is important to model human activities at the subtask level. There exists a disconnect between the dynamic subtask level and the static task level that presents issues when modeling dynamic scenarios. For example, the SPAR-H method is typically used to calculate the human error probability (HEP) at the task level. As demonstrated in this paper, quantification in SPAR-H does not translate to the subtask level. Methods: Two different discrete distributions were generated for each SPAR-H Performance Shaping Factor (PSF) to define the frequency of PSF levels. The first distribution was a uniform, or uninformed, distribution that assumed the frequency of each PSF level was equally likely. The second distribution took the frequency of each PSF level as identified from an assessment of the HERA database. These two approaches were created to identify the resulting distribution of the HEP. The resulting HEP that appears closer to the known distribution, a log-normal centered on 1E-3, is the more desirable. Each approach then has median, average and maximum HFE calculations applied. To calculate these three values, three events, A, B and C, are generated from the PSF level frequencies comprised of subtasks. The median HFE selects the median PSF level from each PSF and calculates the HEP. The average HFE takes the mean PSF level, and the maximum takes the maximum PSF level. The same data set of subtask HEPs yields starkly different HEPs when aggregated to the HFE level in SPAR-H. Results: Assuming that each PSF level in each HFE is equally likely creates an unrealistic distribution of the HEP that is centered at 1. Next, the observed frequency of PSF levels was applied, with the resulting HEP behaving log-normally with a majority of the values under 2.5% HEP. The median, average and maximum HFE calculations did yield different answers for the HFE. The maximum calculation grossly overestimates the HFE, while the resulting HFE distribution lies below the median calculation and above the average calculation. Conclusions: Dynamic task modeling can be pursued through the framework of SPAR-H. Distributions associated with each PSF need to be identified, and may change depending upon the scenario. However, it is very unlikely that each PSF level is equally likely, as the resulting HEP distribution is strongly centered at 100%, which is unrealistic. Other distributions may need to be identified for PSFs to facilitate the transition to dynamic task modeling. Additionally, discrete distributions need to be exchanged for continuous ones so that simulations for the HFE can advance further. This paper provides a method to explore dynamic subtask-to-task translation and provides examples of the process using the SPAR-H method.
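
    The quantification step described above can be sketched as follows: draw one multiplier level per PSF from a discrete frequency distribution, multiply the levels onto a nominal HEP, and inspect the distribution of the results. The PSF levels and frequencies below are invented placeholders, not the HERA-derived values, and only the standard SPAR-H adjustment for composites above 1 is applied.

      import numpy as np

      rng = np.random.default_rng(0)
      nhep = 1e-3    # nominal HEP (SPAR-H diagnosis value)

      # Invented PSF multiplier levels and frequencies (placeholders, not HERA):
      psf_levels = {
          "available_time": ([0.1, 1.0, 10.0], [0.2, 0.6, 0.2]),
          "stress":         ([1.0, 2.0, 5.0], [0.5, 0.3, 0.2]),
          "complexity":     ([0.1, 1.0, 2.0, 5.0], [0.1, 0.5, 0.3, 0.1]),
      }

      def sample_hep():
          # Draw one level per PSF, form the composite multiplier, and apply
          # the SPAR-H adjustment so the HEP stays below 1 for large composites.
          composite = 1.0
          for levels, freq in psf_levels.values():
              composite *= rng.choice(levels, p=freq)
          if composite <= 1.0:
              return nhep * composite
          return nhep * composite / (nhep * (composite - 1.0) + 1.0)

      heps = np.array([sample_hep() for _ in range(100000)])
      print(np.median(heps), heps.mean(), heps.max())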

  1. Comparison of three methods for wind turbine capacity factor estimation.

    PubMed

    Ditkovich, Y; Kuperman, A

    2014-01-01

    Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasiexact" approach utilizes raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other figures of merit of wind turbine performance may be derived from the analytical approach. The third, "approximate" approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
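
    A compact version of the "quasiexact" calculation: weight a discrete power curve by a discretised wind-speed distribution and divide by the rated power. The turbine parameters and the Rayleigh mean speed below are illustrative stand-ins for manufacturer and site data.

      import numpy as np

      # Illustrative discrete power curve (kW) for a 2 MW turbine:
      # cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s (not manufacturer data).
      v = np.arange(0.0, 27.0)                    # wind-speed bins, m/s
      rated = 2000.0
      power = np.where(v < 3, 0.0,
               np.where(v < 12, rated * ((v - 3) / 9.0) ** 3,
                np.where(v <= 25, rated, 0.0)))

      # Discretised Rayleigh wind distribution for a 7 m/s mean site speed.
      sigma = 7.0 * np.sqrt(2.0 / np.pi)
      pdf = (v / sigma**2) * np.exp(-(v**2) / (2.0 * sigma**2))
      pmf = pdf / pdf.sum()

      cf = np.sum(pmf * power) / rated            # "quasiexact" capacity factor
      print(f"capacity factor ~ {cf:.2f}")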

  2. Discrete-time systems with random switches: From systems stability to networks synchronization.

    PubMed

    Guo, Yao; Lin, Wei; Ho, Daniel W C

    2016-03-01

    In this article, we develop approaches that enable us to more accurately and analytically identify the essential patterns that guarantee the almost sure stability of discrete-time systems with random switches. We allow for the case in which the elements of the switching connection matrix obey unbounded, continuous-valued distributions. In addition to almost sure stability, we further investigate almost sure synchronization in complex dynamical networks consisting of randomly connected nodes. Numerical examples illustrate that chaotic dynamics in the synchronization manifold are preserved when statistical parameters enter the almost sure synchronization region established by the developed approach. Moreover, some delicate configurations are considered on the probability space for ensuring synchronization in networks whose nodes are described by nonlinear maps. Both theoretical and numerical results on synchronization are presented by setting only a few random connections in each switch duration. More interestingly, we analytically find it possible to achieve almost sure synchronization in randomly switching complex networks even with very large population sizes, which cannot easily be realized in non-switching but deterministically connected networks.

  4. Ecological Genomics of Marine Picocyanobacteria†

    PubMed Central

    Scanlan, D. J.; Ostrowski, M.; Mazard, S.; Dufresne, A.; Garczarek, L.; Hess, W. R.; Post, A. F.; Hagemann, M.; Paulsen, I.; Partensky, F.

    2009-01-01

    Summary: Marine picocyanobacteria of the genera Prochlorococcus and Synechococcus numerically dominate the picophytoplankton of the world ocean, making a key contribution to global primary production. Prochlorococcus was isolated around 20 years ago and is probably the most abundant photosynthetic organism on Earth. The genus comprises specific ecotypes which are phylogenetically distinct and differ markedly in their photophysiology, allowing growth over a broad range of light and nutrient conditions within the 45°N to 40°S latitudinal belt that they occupy. Synechococcus and Prochlorococcus are closely related, together forming a discrete picophytoplankton clade, but are distinguishable by their possession of dissimilar light-harvesting apparatuses and differences in cell size and elemental composition. Synechococcus strains have a ubiquitous oceanic distribution compared to that of Prochlorococcus strains and are characterized by phylogenetically discrete lineages with a wide range of pigmentation. In this review, we put our current knowledge of marine picocyanobacterial genomics into an environmental context and present previously unpublished genomic information arising from extensive genomic comparisons in order to provide insights into the adaptations of these marine microbes to their environment and how they are reflected at the genomic level. PMID:19487728

  5. Data Base for a National Mineral-Resource Assessment of Undiscovered Deposits of Gold, Silver, Copper, Lead, and Zinc in the Conterminous United States

    USGS Publications Warehouse

    Ludington, S.D.; Cox, D.P.; McCammon, R.B.

    1996-01-01

    For this assessment, the conterminous United States was divided into 12 regions: Adirondack Mountains, Central and Southern Rocky Mountains, Colorado Plateau, East Central, Great Basin, Great Plains, Lake Superior, Northern Appalachians, Northern Rocky Mountains, Pacific Coast, Southern Appalachians, and Southern Basin and Range. The assessment, which was conducted by regional assessment teams of scientists from the USGS, was based on the concepts of permissive tracts and deposit models. Permissive tracts are discrete areas of the United States for which estimates of numbers of undiscovered deposits of a particular deposit type were made. A permissive tract is defined by its geographic boundaries such that the probability of deposits of the type delineated occurring outside the boundary is negligible. Deposit models, which are based on a compilation of worldwide literature and on observation, are sets of data in a convenient form that describe a group of deposits with similar characteristics and that contain information on the common geologic attributes of the deposits and the environments in which they are found. Within each region, the assessment teams delineated permissive tracts for those deposit models that were judged to be appropriate and, when the amount of information warranted, estimated the number of undiscovered deposits. A total of 46 deposit models were used to assess 236 separate permissive tracts. Estimates of undiscovered deposits were limited to a depth of 1 km beneath the surface of the Earth. The estimates of the number of undiscovered deposits of gold, silver, copper, lead, and zinc were expressed in the form of a probability distribution. Commonly, the number of undiscovered deposits was estimated at the 90th, 50th, and 10th percentiles. A Monte Carlo simulation computer program was used to combine the probability distribution of the number of undiscovered deposits with the grade and tonnage data sets associated with each deposit model to obtain the probability distribution for undiscovered metal.
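
    The final combination step lends itself to a small Monte Carlo sketch: draw a number of undiscovered deposits from an elicited discrete distribution, then draw a tonnage and grade for each deposit and accumulate the contained metal. All distributions and numbers below are invented for illustration; they are not USGS estimates.

      import numpy as np

      rng = np.random.default_rng(42)
      n_sims = 100000

      # Invented discrete distribution for the number of undiscovered deposits
      # in one tract, loosely in the 90th/50th/10th percentile elicitation style.
      counts = rng.choice([0, 1, 3, 5], p=[0.1, 0.4, 0.4, 0.1], size=n_sims)

      totals = np.zeros(n_sims)
      for i, n in enumerate(counts):
          if n:
              tonnage = rng.lognormal(np.log(5e6), 1.0, n)    # tonnes of ore
              grade = rng.lognormal(np.log(0.8), 0.5, n)      # % metal
              totals[i] = np.sum(tonnage * grade / 100.0)     # tonnes of metal

      print(np.percentile(totals, [10, 50, 90]))   # undiscovered-metal quantiles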

  6. Quantum Walks on the Line with Phase Parameters

    NASA Astrophysics Data System (ADS)

    Villagra, Marcos; Nakanishi, Masaki; Yamashita, Shigeru; Nakashima, Yasuhiko

    In this paper, a study of discrete-time coined quantum walks on the line is presented. Clear mathematical foundations are still lacking for this quantum walk model. As a step toward this objective, the following question is addressed: given a graph, what is the probability that a quantum walk arrives at a given vertex after some number of steps? This is a very natural question, and for random walks it can be answered by several different combinatorial arguments. For quantum walks this is a highly non-trivial task. Furthermore, this was previously achieved only for one specific coin operator (the Hadamard operator) for walks on the line. Even considering only walks on lines, generalizing these computations to a general SU(2) coin operator is a complex task. The main contribution is a closed-form formula for the amplitudes of the state of the walk (which answers the question above) for a general symmetric SU(2) operator for walks on the line. To this end, a coin operator with parameters that alter the phase of the state of the walk is defined. Then, closed-form solutions are computed by means of Fourier analysis and asymptotic approximation methods. We also present some basic properties of the walk which can be deduced using weak convergence theorems for quantum walks. In particular, the support of the induced probability distribution of the walk is calculated. Then, it is shown how changing the parameters in the coin operator affects the resulting probability distribution.
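
    For the special case of the Hadamard coin, the induced probability distribution can be checked numerically by iterating the coin-and-shift evolution on the amplitude vector, as in the sketch below (a simulation, not the paper's closed-form solution).

      import numpy as np

      def hadamard_walk(steps=100):
          # Discrete-time coined quantum walk on the line, Hadamard coin.
          # amp[pos, c] is the amplitude at position pos-steps, coin state c.
          n = 2 * steps + 1
          amp = np.zeros((n, 2), dtype=complex)
          amp[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric start
          H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
          for _ in range(steps):
              amp = amp @ H.T                 # coin operation
              new = np.zeros_like(amp)
              new[1:, 0] = amp[:-1, 0]        # coin 0 shifts right
              new[:-1, 1] = amp[1:, 1]        # coin 1 shifts left
              amp = new
          return (np.abs(amp) ** 2).sum(axis=1)

      p = hadamard_walk()
      print(p.sum())                          # ~1.0 (unitary evolution)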

  7. Exact combinatorial approach to finite coagulating systems

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr

    2018-02-01

    This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.

  8. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk² to nk log(k), and has potential application to the all-pairs shortest paths problem.
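
    The idea behind the fast method can be sketched with the p-norm trick: a maximum is approximated by a large-p norm, which turns the max-convolution into an ordinary convolution of p-th powers that an FFT evaluates in O(k log(k)). The single fixed p below is a simplification of the paper's numerical scheme.

      import numpy as np
      from scipy.signal import fftconvolve

      def max_convolve_naive(x, y):
          # Exact max-convolution r[m] = max_i x[i] * y[m-i]; O(k^2).
          k = len(x) + len(y) - 1
          r = np.zeros(k)
          for m in range(k):
              i = np.arange(max(0, m - len(y) + 1), min(m, len(x) - 1) + 1)
              r[m] = np.max(x[i] * y[m - i])
          return r

      def max_convolve_pnorm(x, y, p=64.0):
          # Approximate max by a p-norm: (sum_i (x[i]*y[m-i])^p)^(1/p),
          # computed as an FFT convolution of the p-th powers; O(k log k).
          s = np.maximum(fftconvolve(x ** p, y ** p), 0.0)
          return s ** (1.0 / p)

      x, y = np.random.rand(256), np.random.rand(256)
      err = np.max(np.abs(max_convolve_naive(x, y) - max_convolve_pnorm(x, y)))
      print(err)   # small relative to the true values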

  9. Design and simulation of stratified probability digital receiver with application to the multipath communication

    NASA Technical Reports Server (NTRS)

    Deal, J. H.

    1975-01-01

    One approach to the problem of simplifying complex nonlinear filtering algorithms is to use stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.

  10. Logistic and linear regression model documentation for statistical relations between continuous real-time and discrete water-quality constituents in the Kansas River, Kansas, July 2012 through June 2015

    USGS Publications Warehouse

    Foster, Guy M.; Graham, Jennifer L.

    2016-04-06

    The Kansas River is a primary source of drinking water for about 800,000 people in northeastern Kansas. Source-water supplies are treated by a combination of chemical and physical processes to remove contaminants before distribution. Advanced notification of changing water-quality conditions and cyanobacteria and associated toxin and taste-and-odor compounds provides drinking-water treatment facilities time to develop and implement adequate treatment strategies. The U.S. Geological Survey (USGS), in cooperation with the Kansas Water Office (funded in part through the Kansas State Water Plan Fund), and the City of Lawrence, the City of Topeka, the City of Olathe, and Johnson County Water One, began a study in July 2012 to develop statistical models at two Kansas River sites located upstream from drinking-water intakes. Continuous water-quality monitors have been operated and discrete water-quality samples have been collected on the Kansas River at Wamego (USGS site number 06887500) and De Soto (USGS site number 06892350) since July 2012. Continuous and discrete water-quality data collected during July 2012 through June 2015 were used to develop statistical models for constituents of interest at the Wamego and De Soto sites. Logistic models to continuously estimate the probability of occurrence above selected thresholds were developed for cyanobacteria, microcystin, and geosmin. Linear regression models to continuously estimate constituent concentrations were developed for major ions, dissolved solids, alkalinity, nutrients (nitrogen and phosphorus species), suspended sediment, indicator bacteria (Escherichia coli, fecal coliform, and enterococci), and actinomycetes bacteria. These models will be used to provide real-time estimates of the probability that cyanobacteria and associated compounds exceed thresholds and of the concentrations of other water-quality constituents in the Kansas River. The models documented in this report are useful for characterizing changes in water-quality conditions through time, characterizing potentially harmful cyanobacterial events, and indicating changes in water-quality conditions that may affect drinking-water treatment processes.

  11. On the evolution of specialization with a mechanistic underpinning in structured metapopulations.

    PubMed

    Nurmi, Tuomas; Parvinen, Kalle

    2008-03-01

    We analyze the evolution of specialization in resource utilization in a discrete-time metapopulation model using the adaptive dynamics approach. The local dynamics in the metapopulation are based on the Beverton-Holt model with mechanistic underpinnings. The consumer faces a trade-off in the abilities to consume two resources that are spatially heterogeneously distributed among patches that are prone to local catastrophes. We explore the factors favoring the spread of generalist or specialist strategies. Increasing fecundity or decreasing catastrophe probability favors the spread of the generalist strategy, and increasing environmental heterogeneity enlarges the parameter domain where evolutionary branching is possible. When there are no catastrophes, increasing emigration diminishes the parameter domain where evolutionary branching may occur. Otherwise, the effect of emigration on evolutionary dynamics is non-monotonous: both small and large values of emigration probability favor the spread of the specialist strategies, whereas the parameter domain where evolutionary branching may occur is largest when the emigration probability has intermediate values. We compare how different forms of spatial heterogeneity and different models of local growth affect the evolutionary dynamics. We show that even small changes in the resource dynamics may have substantial evolutionary effects on the consumers.

  12. Distribution of Practice and Metacognition in Learning and Long-Term Retention of a Discrete Motor Task

    ERIC Educational Resources Information Center

    Dail, Teresa K.; Christina, Robert W.

    2004-01-01

    This study examined judgments of learning and the long-term retention of a discrete motor task (golf putting) as a function of practice distribution. The results indicated that participants in the distributed practice group performed more proficiently than those in the massed practice group during both acquisition and retention phases. No…

  13. Effective Hamiltonian for travelling discrete breathers

    NASA Astrophysics Data System (ADS)

    MacKay, Robert S.; Sepulchre, Jacques-Alexandre

    2002-05-01

    Hamiltonian chains of oscillators in general probably do not sustain exact travelling discrete breathers. However, solutions which look like moving discrete breathers for some time are not difficult to observe in numerics. In this paper we propose an abstract framework for the description of approximate travelling discrete breathers in Hamiltonian chains of oscillators. The method is based on the construction of an effective Hamiltonian enabling one to describe the dynamics of the translation degree of freedom of moving breathers. Error estimates for the approximate dynamics are also studied. The concept of the Peierls-Nabarro barrier can be made clear in this framework. We illustrate the method with two simple examples, namely the Salerno model which interpolates between the Ablowitz-Ladik lattice and the discrete nonlinear Schrödinger system, and the Fermi-Pasta-Ulam chain.

  14. Among-character rate variation distributions in phylogenetic analysis of discrete morphological characters.

    PubMed

    Harrison, Luke B; Larsson, Hans C E

    2015-03-01

    Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively less work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Using 76 data sets covering a range of plants, invertebrates, and vertebrates, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support for more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The four rate category discrete approximation commonly adopted for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, among the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of the rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully. As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution.
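
    The discrete approximation mentioned above is straightforward to reproduce: split a mean-one gamma distribution into k equiprobable categories, represent each by its median, and rescale so the mean rate is one (a common construction; the shape parameter below is arbitrary).

      from scipy.stats import gamma

      def discrete_gamma_rates(alpha, k=4):
          # k equiprobable rate categories for a mean-1 gamma(alpha) law,
          # each represented by its category median, rescaled to mean 1.
          quantiles = [(2 * i + 1) / (2.0 * k) for i in range(k)]
          rates = gamma.ppf(quantiles, a=alpha, scale=1.0 / alpha)
          return rates * k / rates.sum()

      print(discrete_gamma_rates(0.5, k=4))   # many slow, few fast characters
      print(discrete_gamma_rates(0.5, k=8))   # finer resolution of the tail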

  15. Probability in Action: The Red Traffic Light

    ERIC Educational Resources Information Center

    Shanks, John A.

    2007-01-01

    Emphasis on problem solving in mathematics has gained considerable attention in recent years. While statistics teaching has always been problem driven, the same cannot be said for the teaching of probability where discrete examples involving coins and playing cards are often the norm. This article describes an application of simple probability…

  16. Effect of the surface charge discretization on electric double layers: a Monte Carlo simulation study.

    PubMed

    Madurga, Sergio; Martín-Molina, Alberto; Vilaseca, Eudald; Mas, Francesc; Quesada-Pérez, Manuel

    2007-06-21

    The structure of the electric double layer in contact with discrete and continuously charged planar surfaces is studied within the framework of the primitive model through Monte Carlo simulations. Three different discretization models are considered together with the case of uniform distribution. The effect of discreteness is analyzed in terms of charge density profiles. For point surface groups, a complete equivalence with the situation of uniformly distributed charge is found if profiles are exclusively analyzed as a function of the distance to the charged surface. However, some differences are observed moving parallel to the surface. Significant discrepancies with approaches that do not account for discreteness are reported if charge sites of finite size placed on the surface are considered.

  17. Novel formulation of the ℳ model through the Generalized-K distribution for atmospheric optical channels.

    PubMed

    Garrido-Balsells, José María; Jurado-Navas, Antonio; Paris, José Francisco; Castillo-Vazquez, Miguel; Puerta-Notario, Antonio

    2015-03-09

    In this paper, a novel and deeper physical interpretation of the recently published Málaga or ℳ statistical distribution is provided. This distribution, which has gained wide acceptance in the scientific community, models the optical irradiance scintillation induced by atmospheric turbulence. Here, the analytical expressions previously published are modified in order to express them as a mixture of the known Generalized-K and discrete Binomial and Negative Binomial distributions. In particular, the probability density function (pdf) of the ℳ model is now obtained as a linear combination of Generalized-K pdfs, in which the coefficients depend directly on the parameters of the ℳ distribution. In this way, the Málaga model can be physically interpreted as a superposition of different optical sub-channels, each of them described by the corresponding Generalized-K fading model and weighted by the ℳ-dependent coefficients. The expressions proposed here are simpler than the equations of the original ℳ model and are validated by means of numerical simulations, by generating ℳ-distributed random sequences and their associated histogram. This novel interpretation of the Málaga statistical distribution provides a valuable tool for analyzing the performance of atmospheric optical channels under every turbulence condition.

  18. The Physical Mechanism for Retinal Discrete Dark Noise: Thermal Activation or Cellular Ultraweak Photon Emission?

    PubMed

    Salari, Vahid; Scholkmann, Felix; Bokkon, Istvan; Shahbazi, Farhad; Tuszynski, Jack

    2016-01-01

    For several decades the physical mechanism underlying discrete dark noise of photoreceptors in the eye has remained highly controversial and poorly understood. It is known that the Arrhenius equation, which is based on the Boltzmann distribution for thermal activation, can model only a part (e.g. half of the activation energy) of the retinal dark noise experimentally observed for vertebrate rod and cone pigments. Using the Hinshelwood distribution instead of the Boltzmann distribution in the Arrhenius equation has been proposed as a solution to the problem. Here, we show that using the Hinshelwood distribution does not solve the problem completely. As the discrete components of noise are indistinguishable in shape and duration from those produced by real photon-induced photo-isomerization, the retinal discrete dark noise is most likely due to 'internal photons' inside cells and not due to thermal activation of visual pigments. Indeed, all living cells exhibit spontaneous ultraweak photon emission (UPE), mainly in the optical wavelength range, i.e., 350-700 nm. We show here that the retinal discrete dark noise has a similar rate to UPE, and therefore dark noise is most likely due to spontaneous cellular UPE and not due to thermal activation.

  19. X-33 Hypersonic Boundary Layer Transition

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Horvath, Thomas J.; Hollis, Brian R.; Thompson, Richard A.; Hamilton, H. Harris, II

    1999-01-01

    Boundary layer and aeroheating characteristics of several X-33 configurations have been experimentally examined in the Langley 20-Inch Mach 6 Air Tunnel. Global surface heat transfer distributions, surface streamline patterns, and shock shapes were measured on 0.013-scale models at Mach 6 in air. Parametric variations include angles-of-attack of 20-deg, 30-deg, and 40-deg; Reynolds numbers based on model length of 0.9 to 6.6 million; and body-flap deflections of 0, 10 and 20-deg. The effects of discrete and distributed roughness elements on boundary layer transition, which included trip height, size, location, and distribution, both on and off the windward centerline, were investigated. The discrete roughness results on centerline were used to provide a transition correlation for the X-33 flight vehicle that was applicable across the range of reentry angles of attack. The attachment line discrete roughness results were shown to be consistent with the centerline results, as no increased sensitivity to roughness along the attachment line was identified. The effect of bowed panels was qualitatively shown to be less effective than the discrete trips; however, the distributed nature of the bowed panels affected a larger percent of the aft-body windward surface than a single discrete trip.

  20. Force Transmission Modes of Non-Cohesive and Cohesive Materials at the Critical State.

    PubMed

    Wang, Ji-Peng

    2017-08-31

    This paper investigates the force transmission modes, mainly described by probability density distributions, in non-cohesive dry and cohesive wet granular materials by discrete element modeling. The critical state force transmission patterns are focused on, with the contact model effect being analyzed. By shearing relatively dense and loose dry specimens to the critical state in the conventional triaxial loading path, it is observed that there is a unique critical state force transmission mode. There is a universal critical state force distribution pattern for both the normal contact forces and tangential contact forces. Furthermore, it is found that using either the linear Hooke or the non-linear Hertz model does not affect the universal force transmission mode, which is related only to the grain size distribution. Wet granular materials are also simulated by incorporating a water bridge model. Dense and loose wet granular materials are tested, and critical state behavior is also observed for the wet material. The critical state strength and void ratio of wet granular materials are higher than those of a non-cohesive material. The critical state inter-particle force distribution is altered from that of a non-cohesive material, with higher probability in relatively weak forces. Grains in non-cohesive materials are under compressive stresses, and their principal directions are mainly in the axial loading direction. However, for cohesive wet granular materials, some particles are in tension, and the tensile stresses are in the horizontal direction on which the confinement is applied. The additional confinement by the tensile stress explains the macro strength and dilatancy increase in wet samples.

  2. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
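
    A standard way to fit a discrete hazard model (the basic construction, not the paper's robust estimator) is to expand each subject into person-period records and fit a binary regression to the event indicators. The synthetic data and effect size below are illustrative.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n, max_t = 800, 8
      treat = rng.integers(0, 2, n)
      # Geometric event times: constant per-period hazard lowered by treatment.
      hazard = 0.15 * np.exp(-0.7 * treat)
      event_t = rng.geometric(hazard)

      rows, ys = [], []
      for i in range(n):                          # person-period expansion
          for t in range(1, min(event_t[i], max_t) + 1):
              rows.append([1.0, t, treat[i]])     # intercept, period, treatment
              ys.append(float(t == event_t[i]))   # 1 in the event period only
      X, y = np.array(rows), np.array(ys)

      fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
      print(fit.params)   # treatment coefficient near -0.7 for small hazards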

  3. Landau-Zener transitions and Dykhne formula in a simple continuum model

    NASA Astrophysics Data System (ADS)

    Dunham, Yujin; Garmon, Savannah

    The Landau-Zener model describing the interaction between two linearly driven discrete levels is useful in describing many simple dynamical systems; however, no system is completely isolated from the surrounding environment. Here we examine a generalization of the original Landau-Zener model to study simple environmental influences. We consider a model in which one of the discrete levels is replaced with an energy continuum, and find that the survival probability for the initially occupied diabatic level is unaffected by the presence of the continuum. This result can be predicted by assuming that each step in the evolution of the diabatic state proceeds independently according to the Landau-Zener formula, even in the continuum limit. We also show that, at least for the simplest model, this result can be predicted with the natural generalization of the Dykhne formula for open systems. We also observe dissipation, as the non-escape probability from the discrete levels is no longer equal to one.

  4. A distribution method for analysing the baseline of pulsatile endocrine signals as exemplified by 24-hour growth hormone profiles.

    PubMed

    Matthews, D R; Hindmarsh, P C; Pringle, P J; Brook, C G

    1991-09-01

    To develop a method for quantifying the distribution of concentrations present in hormone profiles, which would allow an observer-unbiased estimate of the time-concentration attribute and an assessment of the baseline. The log-transformed concentrations (regardless of their temporal attribute) are sorted and allocated to class intervals. The number of observations in each interval is then determined and expressed as a percentage of the total number of samples drawn in the study period. The data may be displayed as a frequency distribution or as a cumulative distribution. Cumulative distributions may be plotted as sigmoidal ogives or can be transformed into discrete probabilities (linear probits), which are then linear and amenable to regression analysis. Probability analysis gives estimates of the mean (the value below which 50% of the observed concentrations lie, which we term 'OC50'). 'Baseline' can be defined in terms of percentage occupancy--the 'Observed Concentration for 5%' (which we term 'OC5'), which is the threshold at or below which the hormone concentrations are measured 5% of the time. We report the use of this method applied to 24-hour growth hormone (GH) profiles from 63 children, 26 adults and one giant. We demonstrate that GH effects (growth or gigantism) in these groups are more related to the baseline OC5 concentration than to peak concentration (OC5 +/- 95% confidence limits: adults 0.05 +/- 0.04, peak-height-velocity pubertal 0.39 +/- 0.22, giant 8.9 mU/l). Pulsatile hormone profiles can be analysed using this method in order to assess baseline and other concentration domains.
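
    Once the temporal attribute is discarded, the concentration-domain summaries reduce to order statistics; a minimal sketch, with a made-up lognormal profile standing in for measured GH concentrations:

      import numpy as np

      def occupancy_concentration(profile, pct):
          # Concentration at or below which the hormone is observed pct% of
          # the time (OC50 for pct=50, OC5 for pct=5), ignoring sample order.
          return np.percentile(profile, pct)

      # Illustrative 24-h profile sampled every 20 min (72 values, mU/l):
      rng = np.random.default_rng(0)
      profile = np.exp(rng.normal(0.0, 1.2, size=72))
      print(occupancy_concentration(profile, 50),   # OC50
            occupancy_concentration(profile, 5))    # OC5 ('baseline')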

  5. Multi-Aperture Digital Coherent Combining for Free-Space Optical Communication Receivers

    DTIC Science & Technology

    2016-04-21

    Multi-aperture coherent combining enables using many discrete apertures together to create a large effective aperture.

  6. A fuzzy discrete harmony search algorithm applied to annual cost reduction in radial distribution systems

    NASA Astrophysics Data System (ADS)

    Ameli, Kazem; Alfi, Alireza; Aghaebrahimi, Mohammadreza

    2016-09-01

    Like other optimization algorithms, harmony search (HS) is quite sensitive to its tuning parameters. Several variants of the HS algorithm have been developed to reduce the parameter dependency of HS. This article proposes a novel version of the discrete harmony search (DHS) algorithm, namely fuzzy discrete harmony search (FDHS), for optimizing capacitor placement in distribution systems. In the FDHS, a fuzzy system is employed to dynamically adjust two parameter values, i.e. the harmony memory considering rate and the pitch adjusting rate, with respect to the normalized mean fitness of the harmony memory. The key aspect of FDHS is that it needs substantially fewer iterations to reach convergence in comparison with classical discrete harmony search (CDHS). To the authors' knowledge, this is the first application of DHS to specify appropriate capacitor locations and their best sizes in distribution systems. Simulations are provided for 10-, 34-, 85- and 141-bus distribution systems using CDHS and FDHS. The results show the effectiveness of FDHS over previous related studies.

  7. Anomaly Detection in Dynamic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, Melissa

    2014-10-14

    Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and don't fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.

  8. Discrete-time moment closure models for epidemic spreading in populations of interacting individuals.

    PubMed

    Frasca, Mattia; Sharkey, Kieran J

    2016-06-21

    Understanding the dynamics of spread of infectious diseases between individuals is essential for forecasting the evolution of an epidemic outbreak or for defining intervention policies. The problem is addressed by many approaches including stochastic and deterministic models formulated at diverse scales (individuals, populations) and different levels of detail. Here we consider discrete-time SIR (susceptible-infectious-removed) dynamics propagated on contact networks. We derive a novel set of 'discrete-time moment equations' for the probability of the system states at the level of individual nodes and pairs of nodes. These equations form a set which we close by introducing appropriate approximations of the joint probabilities appearing in them. For the example case of SIR processes, we formulate two types of model, one assuming statistical independence at the level of individuals and one at the level of pairs. From the pair-based model we then derive a model at the level of the population which captures the behavior of epidemics on homogeneous random networks. With respect to their continuous-time counterparts, the models include a larger number of possible transitions from one state to another and joint probabilities with a larger number of individuals. The approach is validated through numerical simulation over different network topologies.
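
    For comparison with such moment closures, the underlying discrete-time SIR process itself is straightforward to simulate on a contact network. The sketch below uses per-step Bernoulli transmission and recovery on an Erdős-Rényi graph with illustrative parameters.

      import numpy as np

      def discrete_sir(adj, beta=0.2, gamma=0.1, i0=(0,), steps=50, seed=0):
          # Discrete-time stochastic SIR on a contact network: each step, an
          # infectious node transmits independently along each edge with
          # probability beta, and recovers with probability gamma.
          rng = np.random.default_rng(seed)
          n = adj.shape[0]
          state = np.zeros(n, dtype=int)        # 0=S, 1=I, 2=R
          state[list(i0)] = 1
          history = [state.copy()]
          for _ in range(steps):
              infectious = state == 1
              # probability a susceptible is infected by >= 1 I-neighbour
              n_inf = adj[:, infectious].sum(axis=1)
              p_inf = 1.0 - (1.0 - beta) ** n_inf
              new_inf = (state == 0) & (rng.random(n) < p_inf)
              new_rec = infectious & (rng.random(n) < gamma)
              state[new_inf] = 1
              state[new_rec] = 2
              history.append(state.copy())
          return np.array(history)

      # Small homogeneous random (Erdős-Rényi) network, illustrative parameters.
      rng = np.random.default_rng(1)
      A = (rng.random((200, 200)) < 0.03).astype(int)
      A = np.triu(A, 1); A = A + A.T
      print((discrete_sir(A) == 1).sum(axis=1))   # infectious count over time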

  9. A Numerical and Theoretical Study of Seismic Wave Diffraction in Complex Geologic Structure

    DTIC Science & Technology

    1989-04-14

    element methods for analyzing linear and nonlinear seismic effects in the surficial geologies relevant to several Air Force missions. The second...exact solution evaluated here indicates that edge-diffracted seismic wave fields calculated by discrete numerical methods probably exhibit significant...study is to demonstrate and validate some discrete numerical methods essential for analyzing linear and nonlinear seismic effects in the surficial

  10. Does Age of Entrance Affect Community College Completion Probabilities? Evidence from a Discrete-Time Hazard Model

    ERIC Educational Resources Information Center

    Calcagno, Juan Carlos; Crosta, Peter; Bailey, Thomas; Jenkins, Davis

    2007-01-01

    Research has consistently shown that older students--those who enter college for the first time at age 25 or older--are less likely to complete a degree or certificate. The authors estimate a single-risk discrete-time hazard model using transcript data on a cohort of first-time community college students in Florida to compare the educational…

  11. Statistics of Infima and Stopping Times of Entropy Production and Applications to Active Molecular Processes

    NASA Astrophysics Data System (ADS)

    Neri, Izaak; Roldán, Édgar; Jülicher, Frank

    2017-01-01

    We study the statistics of infima, stopping times, and passage probabilities of entropy production in nonequilibrium steady states, and we show that they are universal. We consider two examples of stopping times: first-passage times of entropy production and waiting times of stochastic processes, which are the times when a system reaches a given state for the first time. Our main results are as follows: (i) The distribution of the global infimum of entropy production is exponential with mean equal to minus Boltzmann's constant; (ii) we find exact expressions for the passage probabilities of entropy production; (iii) we derive a fluctuation theorem for stopping-time distributions of entropy production. These results have interesting implications for stochastic processes that can be discussed in simple colloidal systems and in active molecular processes. In particular, we show that the timing and statistics of discrete chemical transitions of molecular processes, such as the steps of molecular motors, are governed by the statistics of entropy production. We also show that the extreme-value statistics of active molecular processes are governed by entropy production; for example, we derive a relation between the maximal excursion of a molecular motor against the direction of an external force and the infimum of the corresponding entropy-production fluctuations. Using this relation, we make predictions for the distribution of the maximum backtrack depth of RNA polymerases, which follow from our universal results for entropy-production infima.

  12. Stability analysis for virus spreading in complex networks with quarantine and non-homogeneous transition rates

    NASA Astrophysics Data System (ADS)

    Alarcon-Ramos, L. A.; Schaum, A.; Rodríguez Lucatero, C.; Bernal Jaquez, R.

    2014-03-01

    Virus propagation in complex networks has been studied in the framework of discrete-time Markov process dynamical systems. These studies have been carried out under the assumption of homogeneous transition rates, yielding conditions for virus extinction in terms of the transition probabilities and the largest eigenvalue of the connectivity matrix. Nevertheless, the assumption of homogeneous rates is rather restrictive. In the present study we consider non-homogeneous transition rates, assigned according to a uniform distribution, with susceptible, infected and quarantine states, thus generalizing the previous studies. A remarkable result of this analysis is that extinction depends on the weakest element in the network. Simulation results are presented for large scale-free networks, which corroborate our theoretical findings.

  13. Testing for entanglement with periodic coarse graining

    NASA Astrophysics Data System (ADS)

    Tasca, D. S.; Rudnicki, Łukasz; Aspden, R. S.; Padgett, M. J.; Souto Ribeiro, P. H.; Walborn, S. P.

    2018-04-01

    Continuous-variable systems find valuable applications in quantum information processing. To deal with an infinite-dimensional Hilbert space, one in general has to handle large numbers of discretized measurements in tasks such as entanglement detection. Here we employ the continuous transverse spatial variables of photon pairs to experimentally demonstrate entanglement criteria based on a periodic structure of coarse-grained measurements. The periodization of the measurements allows an efficient evaluation of entanglement using spatial masks acting as mode analyzers over the entire transverse field distribution of the photons and without the need to reconstruct the probability densities of the conjugate continuous variables. Our experimental results demonstrate the utility of the derived criteria with a success rate in entanglement detection of ~60% relative to 7344 studied cases.

  14. The development of a simulation model of primary prevention strategies for coronary heart disease.

    PubMed

    Babad, Hannah; Sanderson, Colin; Naidoo, Bhash; White, Ian; Wang, Duolao

    2002-11-01

    This paper describes the present state of development of a discrete-event micro-simulation model for coronary heart disease prevention. The model is intended to support health policy makers in assessing the impacts on health care resources of different primary prevention strategies. For each person, a set of times to disease events, conditional on the individual's risk factor profile, is sampled from a set of probability distributions that are derived from a new analysis of the Framingham cohort study on coronary heart disease. Methods used to model changes in behavioural and physiological risk factors are discussed and a description of the simulation logic is given. The model incorporates POST (Patient Oriented Simulation Technique) simulation routines.

  15. A method of fitting the gravity model based on the Poisson distribution.

    PubMed

    Flowerdew, R; Aitkin, M

    1982-05-01

    "In this paper, [the authors] suggest an alternative method for fitting the gravity model. In this method, the interaction variable is treated as the outcome of a discrete probability process, whose mean is a function of the size and distance variables. This treatment seems appropriate when the dependent variable represents a count of the number of items (people, vehicles, shipments) moving from one place to another. It would seem to have special advantages where there are some pairs of places between which few items move. The argument will be illustrated with reference to data on the numbers of migrants moving in 1970-1971 between pairs of the 126 labor market areas defined for Great Britain...." excerpt

  16. Combinatoric analysis of heterogeneous stochastic self-assembly.

    PubMed

    D'Orsogna, Maria R; Zhao, Bingyu; Berenji, Bijan; Chou, Tom

    2013-09-28

    We analyze a fully stochastic model of heterogeneous nucleation and self-assembly in a closed system with a fixed total particle number M, and a fixed number of seeds Ns. Each seed can bind a maximum of N particles. A discrete master equation for the probability distribution of the cluster sizes is derived and the corresponding cluster concentrations are found using kinetic Monte Carlo simulations in terms of the density of seeds, the total mass, and the maximum cluster size. In the limit of slow detachment, we also find new analytic expressions and recursion relations for the cluster densities at intermediate times and at equilibrium. Our analytic and numerical findings are compared with those obtained from classical mass-action equations, and the discrepancies between the two approaches are analyzed.

  17. Donders revisited: Discrete or continuous temporal processing underlying reaction time distributions?

    PubMed

    Bao, Yan; Yang, Taoxi; Lin, Xiaoxiong; Pöppel, Ernst

    2016-09-01

    Differences of reaction times to specific stimulus configurations are used as indicators of cognitive processing stages. In this classical experimental paradigm, continuous temporal processing is implicitly assumed. Multimodal response distributions indicate, however, discrete time sampling, which is often masked by experimental conditions. Differences in reaction times reflect discrete temporal mechanisms that are pre-semantically implemented and suggested to be based on entrained neural oscillations. © 2016 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  18. Self-narrowing of size distributions of nanostructures by nucleation antibunching

    NASA Astrophysics Data System (ADS)

    Glas, Frank; Dubrovskii, Vladimir G.

    2017-08-01

    We study theoretically the size distributions of ensembles of nanostructures fed from a nanosize mother phase or a nanocatalyst that contains a limited number of the growth species that form each nanostructure. In such systems, the nucleation probability decreases exponentially after each nucleation event, leading to the so-called nucleation antibunching. Specifically, this effect has been observed in individual nanowires grown in the vapor-liquid-solid mode and greatly affects their properties. By performing numerical simulations over large ensembles of nanostructures as well as developing two different analytical schemes (a discrete and a continuum approach), we show that nucleation antibunching completely suppresses fluctuation-induced broadening of the size distribution. As a result, the variance of the distribution saturates to a time-independent value instead of growing infinitely with time. The size distribution widths and shapes primarily depend on the two parameters describing the degree of antibunching and the nucleation delay required to initiate the growth. The resulting sub-Poissonian distributions are highly desirable for improving size homogeneity of nanowires. On a more general level, this unique self-narrowing effect is expected whenever the growth rate is regulated by a nanophase which is able to nucleate an island much faster than it is refilled from a surrounding macroscopic phase.

  19. Log-gamma directed polymer with fixed endpoints via the replica Bethe Ansatz

    NASA Astrophysics Data System (ADS)

    Thiery, Thimothée; Le Doussal, Pierre

    2014-10-01

    We study the model of a discrete directed polymer (DP) on a square lattice with homogeneous inverse gamma distribution of site random Boltzmann weights, introduced by Seppalainen (2012 Ann. Probab. 40 19-73). The integer moments of the partition sum, $\overline{Z^n}$, are studied using a transfer matrix formulation, which appears as a generalization of the Lieb-Liniger quantum mechanics of bosons to discrete time and space. In the present case of the inverse gamma distribution the model is integrable in terms of a coordinate Bethe Ansatz, as discovered by Brunet. Using the Brunet-Bethe eigenstates we obtain an exact expression for the integer moments $\overline{Z^n}$ for polymers of arbitrary lengths and fixed endpoint positions. Although these moments do not exist for all integer n, we are nevertheless able to construct a generating function which reproduces all existing integer moments and which takes the form of a Fredholm determinant (FD). This suggests an analytic continuation via a Mellin-Barnes transform, and we thereby propose a FD ansatz representation for the probability distribution function (PDF) of Z and its Laplace transform. In the limit of a very long DP, this ansatz yields that the distribution of the free energy converges to the Gaussian unitary ensemble (GUE) Tracy-Widom distribution up to a non-trivial average and variance that we calculate. Our asymptotic predictions coincide with a result by Borodin et al (2013 Commun. Math. Phys. 324 215-32) based on a formula obtained by Corwin et al (2011 arXiv:1110.3489) using the geometric Robinson-Schensted-Knuth (gRSK) correspondence. In addition we obtain the dependence on the endpoint position and the exact elastic coefficient at large time. We argue the equivalence between our formula and that of Borodin et al. As we will discuss, this provides a connection between quantum integrability and tropical combinatorics.

  20. Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-11-01

    A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks was studied. The distributions differed with regard to the variance of the peaks, which were designed to investigate a range of distributions containing only discrete, only smooth or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed from the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.

  1. Broadband Time-Frequency Analysis Using a Multicomputer

    DTIC Science & Technology

    2004-09-30

    Smoothed Pseudo Wigner-Ville Distribution: one of many interference reduction...The Wigner-Ville distribution, the scalogram, and the discrete Gabor transform are among the most well-known of these methods. Due to specific...based upon FFT Accumulation Method • Continuous Wavelet Transform (Scalogram) • Discrete Wigner-Ville Distribution with a selected set of interference

  2. Development of spatial-temporal ventilation heterogeneity and probability analysis tools for hyperpolarized 3He magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Choy, S.; Ahmed, H.; Wheatley, A.; McCormack, D. G.; Parraga, G.

    2010-03-01

    We developed image analysis tools to evaluate spatial and temporal 3He magnetic resonance imaging (MRI) ventilation in asthma and cystic fibrosis. We also developed temporal ventilation probability maps to describe and quantify ventilation heterogeneity over time, to test predictions about respiratory exacerbations or treatment response, and to provide a discrete probability measurement of 3He ventilation defect persistence.

  3. Extrapolating regional probability of drying of headwater streams using discrete observations and gauging networks

    NASA Astrophysics Data System (ADS)

    Beaufort, Aurélien; Lamouroux, Nicolas; Pella, Hervé; Datry, Thibault; Sauquet, Eric

    2018-05-01

    Headwater streams represent a substantial proportion of river systems and many of them have intermittent flows due to their upstream position in the network. These intermittent rivers and ephemeral streams have recently seen a marked increase in interest, especially to assess the impact of drying on aquatic ecosystems. The objective of this paper is to quantify how discrete (in space and time) field observations of flow intermittence help to extrapolate over time the daily probability of drying (defined at the regional scale). Two empirical models based on linear or logistic regressions have been developed to predict the daily probability of intermittence at the regional scale across France. Explanatory variables were derived from available daily discharge and groundwater-level data of a dense gauging/piezometer network, and models were calibrated using discrete series of field observations of flow intermittence. The robustness of the models was tested using an independent, dense regional dataset of intermittence observations and observations of the year 2017 excluded from the calibration. The resulting models were used to extrapolate the daily regional probability of drying in France: (i) over the period 2011-2017 to identify the regions most affected by flow intermittence; (ii) over the period 1989-2017, using a reduced input dataset, to analyse temporal variability of flow intermittence at the national level. The two empirical regression models performed equally well between 2011 and 2017. The accuracy of predictions depended on the number of continuous gauging/piezometer stations and intermittence observations available to calibrate the regressions. Regions with the highest performance were located in sedimentary plains, where the monitoring network was dense and where the regional probability of drying was the highest. Conversely, the worst performances were obtained in mountainous regions. Finally, temporal projections (1989-2016) suggested the highest probabilities of intermittence (> 35 %) in 1989-1991, 2003 and 2005. A high density of intermittence observations improved the information provided by gauging stations and piezometers to extrapolate the temporal variability of intermittent rivers and ephemeral streams.
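
    The logistic-regression variant can be sketched as follows on synthetic predictors; the two indices and all coefficients below are illustrative stand-ins for the study's discharge and groundwater covariates:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Predict the daily regional probability of drying from a discharge index
    # and a groundwater-level index (synthetic data, illustrative coefficients).
    rng = np.random.default_rng(3)
    n_days = 2000
    discharge_idx   = rng.normal(0, 1, n_days)   # standardized low-flow indicator
    groundwater_idx = rng.normal(0, 1, n_days)   # standardized piezometer indicator

    # Synthetic truth: drying becomes more likely when both indices are low
    logit = -1.0 - 1.5 * discharge_idx - 1.0 * groundwater_idx
    dried = rng.random(n_days) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([discharge_idx, groundwater_idx])
    model = LogisticRegression().fit(X, dried)
    print(model.intercept_, model.coef_)   # roughly recovers (-1, -1.5, -1)
    p_today = model.predict_proba([[-2.0, -1.5]])[0, 1]
    print("predicted probability of drying on a dry day:", p_today)
    ```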

  4. Controlling the Shannon Entropy of Quantum Systems

    PubMed Central

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking. PMID:23818819

  5. Controlling the Shannon entropy of quantum systems.

    PubMed

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking.
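
    As a toy illustration of driving a discrete Shannon entropy to a target value (not the paper's controller, which acts on quantum states via probability density function control), one can mix a probability vector with the uniform distribution and solve for the mixing weight by bisection:

    ```python
    import numpy as np

    # Steering the Shannon entropy of a discrete distribution to a target value
    # along the family p(a) = (1-a)*p + a*u; entropy is monotone in a here
    # because each p(a) is a mixture of an earlier p(a') with the uniform u.
    def shannon(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def steer_to_entropy(p, H_target, tol=1e-10):
        u = np.full_like(p, 1.0 / len(p))
        assert shannon(p) <= H_target <= shannon(u)
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            a = 0.5 * (lo + hi)
            if shannon((1 - a) * p + a * u) < H_target:
                lo = a
            else:
                hi = a
        return (1 - lo) * p + lo * u

    p = np.array([0.7, 0.2, 0.05, 0.05])
    q = steer_to_entropy(p, 1.8)
    print(shannon(p), shannon(q))   # ~1.26 bits -> 1.8 bits
    ```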

  6. Avalanches and power-law behaviour in lung inflation

    NASA Astrophysics Data System (ADS)

    Suki, Béla; Barabási, Albert-László; Hantos, Zoltán; Peták, Ferenc; Stanley, H. Eugene

    1994-04-01

    When lungs are emptied during exhalation, peripheral airways close up [1]. For people with lung disease, they may not reopen for a significant portion of inhalation, impairing gas exchange [2,3]. A knowledge of the mechanisms that govern reinflation of collapsed regions of lungs is therefore central to the development of ventilation strategies for combating respiratory problems. Here we report measurements of the terminal airway resistance, Rt, during the opening of isolated dog lungs. When inflated by a constant flow, Rt decreases in discrete jumps. We find that the probability distributions of the sizes of the jumps and of the time intervals between them exhibit power-law behaviour over two decades. We develop a model of the inflation process in which 'avalanches' of airway openings are seen, with power-law distributions of both the size of avalanches and the time intervals between them, which agree quantitatively with those seen experimentally and are reminiscent of the power-law behaviour observed for self-organized critical systems [4]. Thus power-law distributions, arising from avalanches associated with threshold phenomena propagating down a branching tree structure, appear to govern the recruitment of terminal airspaces.
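
    Estimating a power-law exponent from jump-size data of this kind is commonly done by maximum likelihood. A sketch on synthetic Pareto samples, which stand in for the measured resistance jumps:

    ```python
    import numpy as np

    # Maximum-likelihood (Hill) estimate of a continuous power-law exponent,
    # p(x) ~ x^(-alpha) for x >= x_min, on synthetic Pareto data.
    rng = np.random.default_rng(4)
    alpha_true, x_min = 2.5, 1.0
    # inverse-CDF sampling: x = x_min * (1-U)^(-1/(alpha-1))
    sizes = x_min * (1 - rng.random(5000)) ** (-1 / (alpha_true - 1))

    alpha_hat = 1 + len(sizes) / np.sum(np.log(sizes / x_min))
    se = (alpha_hat - 1) / np.sqrt(len(sizes))   # standard error of the MLE
    print(f"alpha = {alpha_hat:.3f} +/- {se:.3f}")  # close to 2.5
    ```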

  7. Josephson junction in the quantum mesoscopic electric circuits with charge discreteness

    NASA Astrophysics Data System (ADS)

    Pahlavani, H.

    2018-04-01

    A quantum mesoscopic electrical LC-circuit with charge discreteness including a Josephson junction is considered, and a nonlinear Hamiltonian describing the dynamics of such a circuit is introduced. The quantum dynamical behavior (persistent current probability) is studied in the charge and phase regimes by numerical solution approaches. The time evolution of charge and current, the number-difference and the bosonic phase, and also the energy spectrum of a quantum mesoscopic electric LC-circuit with charge discreteness coupled with a Josephson junction device are investigated. We show the role of the coupling energy and the electrostatic Coulomb energy of the Josephson junction in the description of the quantum behavior and the spectral properties of a quantum mesoscopic electrical LC-circuit with charge discreteness.

  8. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.

  9. Gaussian quadrature and lattice discretization of the Fermi-Dirac distribution for graphene.

    PubMed

    Oettinger, D; Mendoza, M; Herrmann, H J

    2013-07-01

    We construct a lattice kinetic scheme to study electronic flow in graphene. For this purpose, we first derive a basis of orthogonal polynomials, using as the weight function the ultrarelativistic Fermi-Dirac distribution at rest. Later, we use these polynomials to expand the respective distribution in a moving frame, for both cases, undoped and doped graphene. In order to discretize the Boltzmann equation and make feasible the numerical implementation, we reduce the number of discrete points in momentum space to 18 by applying a Gaussian quadrature, finding that the family of representative wave (2+1)-vectors, which satisfies the quadrature, reconstructs a honeycomb lattice. The procedure and discrete model are validated by solving the Riemann problem, finding excellent agreement with other numerical models. In addition, we have extended the Riemann problem to the case of different dopings, finding that by increasing the chemical potential the electronic fluid behaves as if it increases its effective viscosity.

  10. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    PubMed

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted based on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the simulation length of 70 years. Incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.
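
    The event-queue logic of such a discrete-event micro-simulation can be sketched with a priority queue; the patients, event types, rates and the 60% response probability below are all illustrative assumptions, not the published model:

    ```python
    import heapq, random

    # Minimal discrete-event simulation skeleton: each patient carries sampled
    # times-to-event, and the queue advances the clock from event to event.
    random.seed(5)
    events = []  # priority queue of (time, patient_id, event_name)

    def schedule(t, pid, name):
        heapq.heappush(events, (t, pid, name))

    for pid in range(3):                              # three patients, for brevity
        schedule(random.expovariate(1 / 6.0), pid, "response_assessment")

    while events:
        clock, pid, name = heapq.heappop(events)
        if name == "response_assessment":
            if random.random() < 0.6:                 # responder: stay on drug
                schedule(clock + random.expovariate(1 / 24.0), pid, "discontinuation")
            else:                                     # non-responder: try next drug
                schedule(clock + random.expovariate(1 / 6.0), pid, "response_assessment")
        elif name == "discontinuation":
            print(f"t={clock:5.1f} months: patient {pid} discontinues")
    ```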

  11. Optimization and quantization in gradient symbol systems: a framework for integrating the continuous and the discrete in cognition.

    PubMed

    Smolensky, Paul; Goldrick, Matthew; Mathis, Donald

    2014-08-01

    Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.

  12. Statistical characterization of discrete conservative systems: The web map

    NASA Astrophysics Data System (ADS)

    Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino

    2017-10-01

    We numerically study the two-dimensional, area-preserving web map. When the map is governed by ergodic behavior, it is, as expected, correctly described by Boltzmann-Gibbs statistics, based on the additive entropic functional $S_{BG}[p(x)] = -k \int dx\, p(x) \ln p(x)$. In contrast, possible ergodicity breakdown and transitory sticky dynamical behavior drag the map into the realm of generalized $q$-statistics, based on the nonadditive entropic functional $S_q[p(x)] = k\,(1 - \int dx\, [p(x)]^q)/(q-1)$ (with $q \in \mathbb{R}$ and $S_1 = S_{BG}$). We statistically describe the system (probability distribution of the sum of successive iterates, sensitivity to the initial condition, and entropy production per unit time) for typical values of the parameter that controls the ergodicity of the map. For small (large) values of the external parameter $K$, we observe $q$-Gaussian distributions with $q = 1.935\ldots$ (Gaussian distributions), like for the standard map. In contrast, for intermediate values of $K$, we observe a different scenario, due to the fractal structure of the trajectories embedded in the chaotic sea. Long-standing non-Gaussian distributions are characterized in terms of the kurtosis and the box-counting dimension of the chaotic sea.

  13. Discrete photon statistics from continuous microwave measurements

    NASA Astrophysics Data System (ADS)

    Virally, Stéphane; Simoneau, Jean Olivier; Lupien, Christian; Reulet, Bertrand

    2016-04-01

    Photocount statistics are an important tool for the characterization of electromagnetic fields, especially for fields with an irrelevant phase. In the microwave domain, continuous rather than discrete measurements are the norm. Using a different approach, we recover discrete photon statistics from the cumulants of a continuous distribution of field quadrature measurements. The use of cumulants allows the separation between the signal of interest and experimental noise. Using a parametric amplifier as the first stage of the amplification chain, we extract useful data from up to the sixth cumulant of the continuous distribution of a coherent field, hence recovering up to the third moment of the discrete statistics associated with a signal with much less than one average photon.
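
    The continuous-to-discrete step rests on estimating cumulants of the quadrature distribution. A sketch of the standard central-moment-to-cumulant relations up to sixth order (the mapping from cumulants to photon-number moments is omitted here):

    ```python
    import numpy as np

    # Estimate cumulants kappa_1..kappa_6 of a sample from its central moments,
    # using the standard moment-to-cumulant relations.
    def cumulants_up_to_6(x):
        mu = x.mean()
        c = x - mu
        m = {k: np.mean(c**k) for k in range(2, 7)}   # central moments
        return {
            1: mu,
            2: m[2],
            3: m[3],
            4: m[4] - 3 * m[2]**2,
            5: m[5] - 10 * m[3] * m[2],
            6: m[6] - 15 * m[4] * m[2] - 10 * m[3]**2 + 30 * m[2]**3,
        }

    rng = np.random.default_rng(7)
    x = rng.normal(0.3, 1.0, 1_000_000)   # stand-in for quadrature measurements
    print(cumulants_up_to_6(x))           # Gaussian: cumulants beyond 2nd ~ 0
    ```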

  14. Estimating distribution of hidden objects with drones: from tennis balls to manatees.

    PubMed

    Martin, Julien; Edwards, Holly H; Burgess, Matthew A; Percival, H Franklin; Fagan, Daniel E; Gardner, Beth E; Ortega-Ortiz, Joel G; Ifju, Peter G; Evers, Brandon S; Rambo, Thomas J

    2012-01-01

    Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.

  15. Estimating Distribution of Hidden Objects with Drones: From Tennis Balls to Manatees

    PubMed Central

    Martin, Julien; Edwards, Holly H.; Burgess, Matthew A.; Percival, H. Franklin; Fagan, Daniel E.; Gardner, Beth E.; Ortega-Ortiz, Joel G.; Ifju, Peter G.; Evers, Brandon S.; Rambo, Thomas J.

    2012-01-01

    Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants. PMID:22761712
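
    A stripped-down version of the estimation problem (ignoring the spatial structure that the study's occupancy models exploit) is to recover the total number of objects N and the detection probability p from repeated counts by maximizing a joint binomial likelihood; the counts below are illustrative:

    ```python
    import numpy as np
    from scipy.stats import binom

    # Three repeated counts of a fixed number of objects, each object detected
    # independently with unknown probability p: maximize the joint binomial
    # likelihood over (N, p) by grid search on N with p profiled out.
    counts = np.array([271, 284, 265])   # detected objects per survey (made up)

    best = (None, None, -np.inf)
    for N in range(counts.max(), 500):
        p_hat = counts.mean() / N        # profile MLE of p given N
        ll = binom.logpmf(counts, N, p_hat).sum()
        if ll > best[2]:
            best = (N, p_hat, ll)

    print(f"N_hat = {best[0]}, p_hat = {best[1]:.3f}")
    ```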

  16. The Physical Mechanism for Retinal Discrete Dark Noise: Thermal Activation or Cellular Ultraweak Photon Emission?

    PubMed Central

    Salari, Vahid; Scholkmann, Felix; Bokkon, Istvan; Shahbazi, Farhad; Tuszynski, Jack

    2016-01-01

    For several decades the physical mechanism underlying discrete dark noise of photoreceptors in the eye has remained highly controversial and poorly understood. It is known that the Arrhenius equation, which is based on the Boltzmann distribution for thermal activation, can model only a part (e.g. half of the activation energy) of the retinal dark noise experimentally observed for vertebrate rod and cone pigments. Using the Hinshelwood distribution instead of the Boltzmann distribution in the Arrhenius equation has been proposed as a solution to the problem. Here, we show that using the Hinshelwood distribution does not solve the problem completely. As the discrete components of noise are indistinguishable in shape and duration from those produced by real photon-induced photo-isomerization, the retinal discrete dark noise is most likely due to 'internal photons' inside cells and not due to thermal activation of visual pigments. Indeed, all living cells exhibit spontaneous ultraweak photon emission (UPE), mainly in the optical wavelength range, i.e., 350–700 nm. We show here that the retinal discrete dark noise has a rate similar to that of UPE, and therefore dark noise is most likely due to spontaneous cellular UPE and not due to thermal activation. PMID:26950936

  17. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation

    DTIC Science & Technology

    2016-09-01

    Approved for public release; distribution is unlimited. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation, by Timothy A. Curling...optimization and discrete-event simulation. This construct can potentially provide an effective means of improving order management decisions. However

  18. Discrete stochastic analogs of Erlang epidemic models.

    PubMed

    Getz, Wayne M; Dougherty, Eric R

    2018-12-01

    Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter is itself at one end of the spectrum of Erlang SE[Formula: see text]I[Formula: see text]R models with [Formula: see text] concatenated E compartments and [Formula: see text] concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE[Formula: see text]I[Formula: see text]R models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges but sacrifices best-fit likelihood scores by at least 7%.
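
    A deterministic discrete-time analog with concatenated stages can be sketched as follows; the stage counts and per-step probabilities are illustrative, and geometric stage-dwell times are used here as a simple discrete counterpart of Erlang stages:

    ```python
    import numpy as np

    # Discrete-time mean-field SE(m)I(n)R: m concatenated E stages and n
    # concatenated I stages, each exited with a fixed per-step probability,
    # so total latent/infectious periods are sums of geometric dwell times.
    m, n = 3, 2
    beta, sigma, gamma = 0.4, 0.5, 0.4   # infection force scale, stage exit probs
    E, I = np.zeros(m), np.zeros(n)
    S, R = 0.999, 0.0
    I[0] = 0.001

    for t in range(200):
        force = beta * I.sum()
        new_inf = S * (1 - np.exp(-force))   # discrete-time infection probability
        E_out, I_out = sigma * E, gamma * I  # stage-to-stage flows
        S -= new_inf
        E -= E_out; E[1:] += E_out[:-1]; E[0] += new_inf
        I -= I_out; I[1:] += I_out[:-1]; I[0] += E_out[-1]
        R += I_out[-1]

    print("final epidemic size:", R)
    ```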

  19. StratBAM: A Discrete-Event Simulation Model to Support Strategic Hospital Bed Capacity Decisions.

    PubMed

    Devapriya, Priyantha; Strömblad, Christopher T B; Bailey, Matthew D; Frazier, Seth; Bulger, John; Kemberling, Sharon T; Wood, Kenneth E

    2015-10-01

    The ability to accurately measure and assess current and potential health care system capacities is an issue of local and national significance. Recent joint statements by the Institute of Medicine and the Agency for Healthcare Research and Quality have emphasized the need to apply industrial and systems engineering principles to improving health care quality and patient safety outcomes. To address this need, a decision support tool was developed for planning and budgeting of current and future bed capacity, and evaluating potential process improvement efforts. The Strategic Bed Analysis Model (StratBAM) is a discrete-event simulation model created after a thorough analysis of patient flow and data from Geisinger Health System's (GHS) electronic health records. Key inputs include: timing, quantity and category of patient arrivals and discharges; unit-level length of care; patient paths; and projected patient volume and length of stay. Key outputs include: admission wait time by arrival source and receiving unit, and occupancy rates. Electronic health records were used to estimate parameters for probability distributions and to build empirical distributions for unit-level length of care and for patient paths. Validation of the simulation model against GHS operational data confirmed its ability to model real-world data consistently and accurately. StratBAM was successfully used to evaluate the system impact of forecasted patient volumes and length of stay in terms of patient wait times, occupancy rates, and cost. The model is generalizable and can be appropriately scaled for larger and smaller health care settings.

  20. THREE DISCRETE GROUPS WITH HOMOGENEOUS CHEMISTRY ALONG THE RED GIANT BRANCH IN THE GLOBULAR CLUSTER NGC 2808

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carretta, E., E-mail: eugenio.carretta@oabo.inaf.it

    2014-11-10

    We present the homogeneous reanalysis of Mg and Al abundances from high resolution UVES/FLAMES spectra for 31 red giants in the globular cluster NGC 2808. We found a well defined Mg-Al anticorrelation reaching a regime of subsolar Mg abundance ratios, with a spread of about 1.4 dex in [Al/Fe]. The main result from the improved statistics of our sample is that the distribution of stars is not continuous along the anticorrelation because they are neatly clustered into three distinct clumps, each with different chemical compositions. One group (P) shows a primordial composition of field stars of similar metallicity, and the other two (I and E) have increasing abundances of Al and decreasing abundances of Mg. The fraction of stars we found in the three components (P: 68%, I: 19%, E: 13%) is in excellent agreement with the ratios computed for the three distinct main sequences in NGC 2808: for the first time there is a clear correspondence between discrete photometric sequences of dwarfs and distinct groups of giants with homogeneous chemistry. The composition of the I group cannot be reproduced by mixing of matter with extreme processing in hot H-burning and gas with pristine, unprocessed composition, as also found in the recent analysis of three discrete groups in NGC 6752. This finding suggests that different classes of polluters were probably at work in NGC 2808 as well.

  1. Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models

    NASA Astrophysics Data System (ADS)

    Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido

    2016-06-01

    We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem in a set of two-level subsystems, we find a natural variable step size, that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.
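
    For contrast, the rejection-free kinetic Monte Carlo (Gillespie) step that such event-driven schemes generalize draws exponential waiting times from the total rate. A baseline sketch for an ensemble of two-level subsystems (not the authors' algorithm):

    ```python
    import numpy as np

    # Standard rejection-free kinetic Monte Carlo: draw an exponential waiting
    # time from the total rate, then pick one transition proportionally to rate.
    rng = np.random.default_rng(6)

    def gillespie_step(rates, t):
        total = rates.sum()
        t += rng.exponential(1.0 / total)               # advance physical time
        move = rng.choice(len(rates), p=rates / total)  # pick a transition
        return t, move

    # ensemble of two-level systems: spins flipping with asymmetric rates
    n_spins = 100
    spins = np.ones(n_spins, dtype=int)
    up_rate, down_rate = 0.2, 1.0
    t = 0.0
    for _ in range(10_000):
        rates = np.where(spins == 1, down_rate, up_rate)
        t, k = gillespie_step(rates, t)
        spins[k] *= -1

    print(f"t = {t:.1f}, magnetization = {spins.mean():.3f}")
    # equilibrium magnetization approaches (0.2 - 1.0)/(0.2 + 1.0) = -0.667
    ```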

  2. Comparison of Three Methods for Wind Turbine Capacity Factor Estimation

    PubMed Central

    Ditkovich, Y.; Kuperman, A.

    2014-01-01

    Three approaches to calculating the capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first, "quasiexact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other figures of merit for wind turbine performance may be derived based on the analytical approach. The third, "approximate" approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755
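
    The "quasiexact" numerical route can be sketched by integrating a discretized power curve against a Rayleigh wind-speed distribution; the turbine numbers below are generic assumptions, not a specific manufacturer's data:

    ```python
    import numpy as np

    # Capacity factor = E[P(v)] / P_rated, computed by numerical integration of
    # a generic power curve against a Rayleigh wind-speed density.
    v = np.linspace(0, 30, 301)                   # wind speed grid [m/s]
    v_in, v_rated, v_out, P_rated = 3.0, 12.0, 25.0, 2.0e6

    power = np.where((v >= v_in) & (v < v_rated),
                     P_rated * ((v - v_in) / (v_rated - v_in)) ** 3, 0.0)
    power = np.where((v >= v_rated) & (v <= v_out), P_rated, power)

    v_mean = 7.0                                  # site average wind speed
    # Rayleigh pdf parameterized by its mean: f(v) = pi*v/(2*vm^2) * exp(-pi*v^2/(4*vm^2))
    rayleigh = (np.pi * v / (2 * v_mean**2)) * np.exp(-np.pi * v**2 / (4 * v_mean**2))

    cf = np.trapz(power * rayleigh, v) / P_rated
    print(f"capacity factor: {cf:.3f}")
    ```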

  3. Curtain eruptions from Enceladus' south-polar terrain.

    PubMed

    Spitale, Joseph N; Hurford, Terry A; Rhoden, Alyssa R; Berkson, Emily E; Platts, Symeon S

    2015-05-07

    Observations of the south pole of the Saturnian moon Enceladus revealed large rifts in the south-polar terrain, informally called 'tiger stripes', named Alexandria, Baghdad, Cairo and Damascus Sulci. These fractures have been shown to be the sources of the observed jets of water vapour and icy particles and to exhibit higher temperatures than the surrounding terrain. Subsequent observations have focused on obtaining close-up imaging of this region to better characterize these emissions. Recent work examined those newer data sets and used triangulation of discrete jets to produce maps of jetting activity at various times. Here we show that much of the eruptive activity can be explained by broad, curtain-like eruptions. Optical illusions in the curtain eruptions resulting from a combination of viewing direction and local fracture geometry produce image features that were probably misinterpreted previously as discrete jets. We present maps of the total emission along the fractures, rather than just the jet-like component, for five times during an approximately one-year period in 2009 and 2010. An accurate picture of the style, timing and spatial distribution of the south-polar eruptions is crucial to evaluating theories for the mechanism controlling the eruptions.

  4. Curtain eruptions from Enceladus' south-polar terrain

    NASA Astrophysics Data System (ADS)

    Spitale, Joseph N.; Hurford, Terry A.; Rhoden, Alyssa R.; Berkson, Emily E.; Platts, Symeon S.

    2015-05-01

    Observations of the south pole of the Saturnian moon Enceladus revealed large rifts in the south-polar terrain, informally called `tiger stripes', named Alexandria, Baghdad, Cairo and Damascus Sulci. These fractures have been shown to be the sources of the observed jets of water vapour and icy particles and to exhibit higher temperatures than the surrounding terrain. Subsequent observations have focused on obtaining close-up imaging of this region to better characterize these emissions. Recent work examined those newer data sets and used triangulation of discrete jets to produce maps of jetting activity at various times. Here we show that much of the eruptive activity can be explained by broad, curtain-like eruptions. Optical illusions in the curtain eruptions resulting from a combination of viewing direction and local fracture geometry produce image features that were probably misinterpreted previously as discrete jets. We present maps of the total emission along the fractures, rather than just the jet-like component, for five times during an approximately one-year period in 2009 and 2010. An accurate picture of the style, timing and spatial distribution of the south-polar eruptions is crucial to evaluating theories for the mechanism controlling the eruptions.

  5. Statistics of primordial density perturbations from discrete seed masses

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Bertschinger, Edmund

    1991-01-01

    The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.

  6. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems with faults, using output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square-root PDF along the space direction, which yields a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  7. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems with faults, using output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square-root PDF along the space direction, which yields a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  8. Temporal Coherence: A Model for Non-Stationarity in Natural and Simulated Wind Records

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinker, Jennifer M.; Gavin, Henri P.; Clifton, Andrew

    We present a novel methodology for characterizing and simulating non-stationary stochastic wind records. In this new method, non-stationarity is characterized and modelled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components. Temporal coherence can also be used to quantify non-stationary characteristics in wind data. Three case studies are presented that analyze the non-stationarity of turbulent wind data obtained at the National Wind Technology Center near Boulder, Colorado, USA. The first study compares the temporal and spectral characteristics of a stationary wind record and a non-stationary wind record in order to highlight their differences in temporal coherence. The second study examines the distribution of one of the proposed temporal coherence parameters and uses it to quantify the prevalence of non-stationarity in the dataset. The third study examines how temporal coherence varies with a range of atmospheric parameters to determine what conditions produce more non-stationarity.

  9. Is there substructure around M87?

    NASA Astrophysics Data System (ADS)

    Oldham, L. J.; Evans, N. W.

    2016-10-01

    We present a general method to identify infalling substructure in discrete data sets with position and line-of-sight velocity data. We exploit the fact that galaxies falling on to a brightest cluster galaxy (BCG) in a virialized cluster, or dwarf satellites falling on to a central galaxy like the Milky Way, follow nearly radial orbits. If the orbits are exactly radial, we show how to find the probability distribution for a satellite's energy, given a tracer density for the satellite population, by solving an Abel integral equation. This is an extension of Eddington's classical formula for the isotropic distribution function. When applied to a system of galaxies, clustering in energy space can then be quantified using the Kullback-Leibler divergence, and groups of objects can be identified which, though separated in the sky, may be falling in on the same orbit. This method is tested using mock data and applied to the satellite galaxy population around M87, the BCG in Virgo, and a number of associations are found which may represent infalling galaxy groups.

  10. Deriving photometric redshifts using fuzzy archetypes and self-organizing maps - I. Methodology

    NASA Astrophysics Data System (ADS)

    Speagle, Joshua S.; Eisenstein, Daniel J.

    2017-07-01

    We propose a method to substantially increase the flexibility and power of template fitting-based photometric redshifts by transforming a large number of galaxy spectral templates into a corresponding collection of 'fuzzy archetypes' using a suitable set of perturbative priors designed to account for empirical variation in dust attenuation and emission-line strengths. To bypass widely separated degeneracies in parameter space (e.g. the redshift-reddening degeneracy), we train self-organizing maps (SOMs) on large 'model catalogues' generated from Monte Carlo sampling of our fuzzy archetypes to cluster the predicted observables in a topologically smooth fashion. Subsequent sampling over the SOM then allows full reconstruction of the relevant probability distribution functions (PDFs). This combined approach enables the multimodal exploration of known variation among galaxy spectral energy distributions with minimal modelling assumptions. We demonstrate the power of this approach to recover full redshift PDFs using discrete Markov chain Monte Carlo sampling methods combined with SOMs constructed from Large Synoptic Survey Telescope ugrizY and Euclid YJH mock photometry.

  11. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique mimics one used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
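
    The PDF of the net count can indeed be tabulated by direct convolution of the two Poisson distributions; a sketch with illustrative rates:

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Net count OC = G - IRR*B, with gross count G and blank count B independent
    # Poissons. Tabulate the PMF of OC over its (lattice) support by convolution.
    IRR = 4
    mu_blank = 3.0                    # expected blank count in the blank count time
    mu_gross = 10.0 + IRR * mu_blank  # sample contribution plus blank contribution

    g = np.arange(0, 200)
    b = np.arange(0, 60)
    pg, pb = poisson.pmf(g, mu_gross), poisson.pmf(b, mu_blank)

    pmf = {}                          # P(OC = g - IRR*b)
    for bi, pbi in zip(b, pb):
        for gi, pgi in zip(g, pg):
            key = gi - IRR * bi
            pmf[key] = pmf.get(key, 0.0) + pgi * pbi

    mean_oc = sum(k * p for k, p in pmf.items())
    print("E[OC] =", mean_oc, " (theory:", mu_gross - IRR * mu_blank, ")")
    ```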

  12. Optimum random and age replacement policies for customer-demand multi-state system reliability under imperfect maintenance

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang

    2016-04-01

    This paper proposes generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow a non-homogeneous continuous-time Markov process, which is a continuous-time, discrete-state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of a customer-centred reliability measure is developed based on the system performance and the customer demand. We develop random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
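
    For a homogeneous simplification (the paper's element degrades via a non-homogeneous process and uses a recursive scheme instead), the time-dependent state probabilities follow directly from the matrix exponential of the transition-rate matrix:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Time-dependent state probabilities p(t) = p(0) @ expm(Q*t) for a simple
    # homogeneous three-state degradation chain (rates are illustrative).
    # states: 0 = nominal, 1 = degraded, 2 = failed (absorbing)
    Q = np.array([[-0.10,  0.08, 0.02],
                  [ 0.00, -0.05, 0.05],
                  [ 0.00,  0.00, 0.00]])   # generator: rows sum to zero

    p0 = np.array([1.0, 0.0, 0.0])
    for t in (10, 50, 100):
        pt = p0 @ expm(Q * t)
        print(f"t={t:4d}: P(states) = {np.round(pt, 4)}")
    ```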

  13. Practical secure quantum communications

    NASA Astrophysics Data System (ADS)

    Diamanti, Eleni

    2015-05-01

    We review recent advances in the field of quantum cryptography, focusing in particular on practical implementations of two central protocols for quantum network applications, namely key distribution and coin flipping. The former allows two parties to share secret messages with information-theoretic security, even in the presence of a malicious eavesdropper in the communication channel, which is impossible with classical resources alone. The latter enables two distrustful parties to agree on a random bit, again with information-theoretic security, and with a cheating probability lower than the one that can be reached in a classical scenario. Our implementations rely on continuous-variable technology for quantum key distribution and on a plug and play discrete-variable system for coin flipping, and necessitate a rigorous security analysis adapted to the experimental schemes and their imperfections. In both cases, we demonstrate the protocols with provable security over record long distances in optical fibers and assess the performance of our systems as well as their limitations. The reported advances offer a powerful toolbox for practical applications of secure communications within future quantum networks.

  14. Multiscale Path Metrics for the Analysis of Discrete Geometric Structures

    DTIC Science & Technology

    2017-11-30

    Report: Multiscale Path Metrics for the Analysis of Discrete Geometric Structures. The views, opinions and/or findings contained in this report are those...Email: tomasi@cs.duke.edu. Distribution Statement: Approved for public release.

  15. Method for detecting and avoiding flight hazards

    NASA Astrophysics Data System (ADS)

    von Viebahn, Harro; Schiefele, Jens

    1997-06-01

    Today's aircraft equipment comprises several independent warning and hazard avoidance systems like GPWS, TCAS or weather radar. It is the pilot's task to monitor all these systems and take the appropriate action in case of an emerging hazardous situation. The developed method for detecting and avoiding flight hazards combines all potential external threats to an aircraft into a single system. It is based on a model of the airspace surrounding the aircraft, consisting of discrete volume elements. For each element of the volume, the threat probability is derived or computed from sensor output, databases, or information provided via datalink. The position of the own aircraft is predicted by utilizing a probability distribution. This approach ensures that all potential positions of the aircraft within the near future are considered while weighting the most likely flight path. A conflict detection algorithm initiates an alarm in case the threat probability exceeds a threshold. An escape manoeuvre is generated taking into account all potential hazards in the vicinity, not only the one which caused the alarm. The pilot gets visual information about the type, location, and severity of the threat. The algorithm was implemented and tested in a flight simulator environment. The current version comprises traffic, terrain and obstacle hazard avoidance functions. Its general formulation allows an easy integration of e.g. weather information or airspace restrictions.

  16. A description of discrete internal representation schemes for visual pattern discrimination.

    PubMed

    Foster, D H

    1980-01-01

    A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.

  17. Heterogeneity of clonal patterns among patches of kudzu, Pueraria montana var. lobata, an invasive plant

    PubMed Central

    Kartzinel, Tyler R.; Hamrick, J. L.; Wang, Chongyun; Bowsher, Alan W.; Quigley, Bryan G. P.

    2015-01-01

    Background and Aims Viny species are among the most serious invasive plants, and better knowledge of how vines grow to dominate landscapes is needed. Patches may contain a single genotype (i.e. genet), a competitively dominant genet or many independent but interacting genets, yet the clonal structure of vining species is often not apparent. Molecular markers can discriminate among the genetic identities of entwined vines to reveal the number and spatial distribution of genets. This study investigated how genets are spatially distributed within and among discrete patches of the invasive vine kudzu, Pueraria montana var. lobata, in the United States. It was expected that ramets of genets would be spatially clustered within patches, and that an increase in the number of genets within a patch would be associated with a decrease in the average size of each genet. Methods Six discrete kudzu patches were sampled across 2 years, and 1257 samples were genotyped at 21 polymorphic allozyme loci. Variation in genotypic and genetic diversity among patches was quantified and patterns of genet interdigitation were analysed. Key Results Substantial genotypic and genetic variation occurred within and among patches. As few as ten overlapping genets spanned up to 68 m2 in one patch, while >90 % of samples were genetically unique in another patch. Genotypic diversity within patches increased as mean clone size decreased, although spatially widespread genets did not preclude interdigitation. Eight genets were shared across ≥2 patches, suggesting that vegetative dispersal can occur among patches. Conclusions Genetically unique kudzu vines are highly interdigitated. Multiple vegetative propagules have become established in spatially discrete patches, probably through the movement of highway construction or maintenance machinery. The results suggest that common methods for controlling invasive vines (e.g. mowing) may inadvertently increase genotypic diversity. Thus, understanding vine architecture and growth has practical implications. PMID:26229064

  18. Decision making under uncertainty: a quasimetric approach.

    PubMed

    N'Guyen, Steve; Moulin-Frier, Clément; Droulez, Jacques

    2013-01-01

    We propose a new approach for solving a class of discrete decision making problems under uncertainty with positive cost. This issue concerns multiple and diverse fields such as engineering, economics, artificial intelligence, cognitive science and many others. Basically, an agent has to choose a single or series of actions from a set of options, without knowing for sure their consequences. Schematically, two main approaches have been followed: either the agent learns which option is the correct one to choose in a given situation by trial and error, or the agent already has some knowledge on the possible consequences of his decisions; this knowledge being generally expressed as a conditional probability distribution. In the latter case, several optimal or suboptimal methods have been proposed to exploit this uncertain knowledge in various contexts. In this work, we propose a different approach, based on the geometric intuition of distance. More precisely, we define a goal-independent quasimetric structure on the state space, taking into account both the cost function and the transition probability. We then compare precision and computation time with classical approaches.
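
    The paper does not spell out its construction, but one plausible way to realise such a goal-independent quasimetric, sketched here under our own assumptions, is to weight each transition by its cost divided by its success probability and take shortest paths; the asymmetry of the weights is what makes the result a quasimetric rather than a metric, and positive weights let Dijkstra's algorithm apply.

      import heapq

      def quasimetric(states, succ, cost, prob, source):
          """Shortest 'expected effort' from source to every state.

          succ(s) yields successor states; cost(s, t) > 0 is the action cost and
          prob(s, t) in (0, 1] the transition success probability.
          """
          dist = {s: float("inf") for s in states}
          dist[source] = 0.0
          heap = [(0.0, source)]
          while heap:
              d, s = heapq.heappop(heap)
              if d > dist[s]:
                  continue                       # stale heap entry
              for t in succ(s):
                  nd = d + cost(s, t) / prob(s, t)
                  if nd < dist[t]:
                      dist[t] = nd
                      heapq.heappush(heap, (nd, t))
          return dist

      states = ["s0", "s1", "s2"]
      edges = {"s0": ["s1"], "s1": ["s2"], "s2": []}
      c = lambda s, t: 1.0                        # unit action cost
      p = lambda s, t: 0.5 if t == "s2" else 0.9  # illustrative success probabilities
      print(quasimetric(states, lambda s: edges[s], c, p, "s0"))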

  19. Influence of adhesion and friction on the geometry of packings of spherical particles

    NASA Astrophysics Data System (ADS)

    Martin, C. L.; Bordia, R. K.

    2008-03-01

    We study the effect of both adhesion and friction on the geometry of monosized packings of spheres by means of discrete element simulations. We use elastic properties that are characteristic of materials typically used for particulate processing (Young’s modulus in the range 20-200 GPa). The geometrical features, both global and local, of the packings are studied using a variety of approaches in order to investigate their ability to quantify the effect of adhesion and/or friction. We show that both adhesion and friction interaction decrease the packing fraction. The very localized ordering that adhesion triggers is particularly investigated by use of the radial distribution function, the ordering parameter Q6 , and four triclinic cells that allow a description of the microstructure at the local level. We show that the probability of occurrence of these triclinic cells is approximately proportional to their degree of freedom when neither adhesion nor friction plays a role. We find that the introduction of adhesive interactions increases the probability of occurrence of those cells that have the lowest degree of freedom.
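
    Of the quantification tools mentioned, the radial distribution function is the simplest to sketch. The random particle positions, the absence of periodic boundaries, and the bin width below are illustrative assumptions, not the discrete element simulations of the paper.

      import numpy as np

      def radial_distribution(positions, r_max, n_bins, box_volume):
          """Histogram of pairwise distances normalised by ideal-gas shell counts."""
          n = len(positions)
          diffs = positions[:, None, :] - positions[None, :, :]
          d = np.linalg.norm(diffs, axis=-1)[np.triu_indices(n, k=1)]
          hist, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
          shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
          density = n / box_volume
          ideal = density * shell_vol * n / 2.0   # expected pair counts, ideal gas
          return (edges[:-1] + edges[1:]) / 2.0, hist / ideal

      rng = np.random.default_rng(1)
      pos = rng.random((500, 3))                  # 500 sphere centres in a unit box
      r, g = radial_distribution(pos, r_max=0.3, n_bins=30, box_volume=1.0)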

  20. High-Frequency Replanning Under Uncertainty Using Parallel Sampling-Based Motion Planning

    PubMed Central

    Sun, Wen; Patil, Sachin; Alterovitz, Ron

    2015-01-01

    As sampling-based motion planners become faster, they can be re-executed more frequently by a robot during task execution to react to uncertainty in robot motion, obstacle motion, sensing noise, and uncertainty in the robot’s kinematic model. We investigate and analyze high-frequency replanning (HFR), where, during each period, fast sampling-based motion planners are executed in parallel as the robot simultaneously executes the first action of the best motion plan from the previous period. We consider discrete-time systems with stochastic nonlinear (but linearizable) dynamics and observation models with noise drawn from zero mean Gaussian distributions. The objective is to maximize the probability of success (i.e., avoid collision with obstacles and reach the goal) or to minimize path length subject to a lower bound on the probability of success. We show that, as parallel computation power increases, HFR offers asymptotic optimality for these objectives during each period for goal-oriented problems. We then demonstrate the effectiveness of HFR for holonomic and nonholonomic robots including car-like vehicles and steerable medical needles. PMID:26279645

  1. Insights into the latent multinomial model through mark-resight data on female grizzly bears with cubs-of-the-year

    USGS Publications Warehouse

    Higgs, Megan D.; Link, William; White, Gary C.; Haroldson, Mark A.; Bjornlie, Daniel D.

    2013-01-01

    Mark-resight designs for estimation of population abundance are common and attractive to researchers. However, inference from such designs is very limited when faced with sparse data, either from a low number of marked animals, a low probability of detection, or both. In the Greater Yellowstone Ecosystem, yearly mark-resight data are collected for female grizzly bears with cubs-of-the-year (FCOY), and inference suffers from both limitations. To overcome difficulties due to sparseness, we assume homogeneity in sighting probabilities over 16 years of bi-annual aerial surveys. We model counts of marked and unmarked animals as multinomial random variables, using the capture frequencies of marked animals for inference about the latent multinomial frequencies for unmarked animals. We discuss undesirable behavior of the commonly used discrete uniform prior distribution on the population size parameter and provide OpenBUGS code for fitting such models. The application provides valuable insights into subtleties of implementing Bayesian inference for latent multinomial models. We tie the discussion to our application, though the insights are broadly useful for applications of the latent multinomial model.
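
    The flavour of the inference can be conveyed by a much simpler grid-based sketch for a single survey: a binomial sighting model with a discrete uniform prior on the population size N. This is an illustrative analogue of our own, not the latent multinomial model or the OpenBUGS code of the paper, and it inherits the uniform-prior caveat the paper discusses.

      import numpy as np
      from scipy.stats import binom

      def posterior_N(n_seen, p_detect, N_max=500):
          """Posterior over population size N given n_seen sightings with detection prob p."""
          Ns = np.arange(n_seen, N_max + 1)
          prior = np.ones_like(Ns, dtype=float)   # discrete uniform prior on N
          like = binom.pmf(n_seen, Ns, p_detect)  # binomial sighting model
          post = prior * like
          return Ns, post / post.sum()

      Ns, post = posterior_N(n_seen=40, p_detect=0.6)
      print("posterior mean N:", (Ns * post).sum())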

  2. Approximation of discrete-time LQG compensators for distributed systems with boundary input and unbounded measurement

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1987-01-01

    The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.
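
    For a finite-dimensional approximation, the control half of a discrete-time LQG compensator reduces to a Riccati recursion; the sketch below iterates the discrete algebraic Riccati equation to convergence for illustrative matrices. The boundary-input, infinite-dimensional setting of the paper is not captured, and the estimator Riccati equation (the Kalman filter half) is the dual computation.

      import numpy as np

      def dare_iterate(A, B, Q, R, iters=500):
          """Fixed-point iteration of the discrete algebraic Riccati equation."""
          P = Q.copy()
          for _ in range(iters):
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
              P = Q + A.T @ P @ (A - B @ K)
          return P, K

      A = np.array([[1.0, 0.1], [0.0, 1.0]])      # illustrative discrete double integrator
      B = np.array([[0.0], [0.1]])
      Q = np.eye(2); R = np.array([[1.0]])
      P, K = dare_iterate(A, B, Q, R)
      print("LQR gain:", K)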

  3. Chronic hypothermia following tuberculous meningitis.

    PubMed Central

    Dick, D J; Sanders, G L; Saunders, M; Rawlins, M D

    1981-01-01

    A patient who developed chronic hypothermia following tuberculous meningitis is described. A central defect of thermoregulation was discovered, probably due to a discrete vascular lesion in the anterior hypothalamus. PMID:6785394

  4. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  5. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
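
    The idea of an SDE that is ergodic for the Bayesian posterior can be illustrated with the overdamped Langevin equation dX = grad log pi(X) dt + sqrt(2) dW. The sketch below uses an explicit Euler-Maruyama step and a standard normal stand-in for the posterior; the paper itself discretizes with the implicit Euler method, which this sketch does not reproduce.

      import numpy as np

      def langevin_samples(grad_log_post, x0, n_steps, dt=1e-2, seed=0):
          """Euler-Maruyama discretization of dX = grad log pi(X) dt + sqrt(2) dW."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          out = np.empty((n_steps, x.size))
          for k in range(n_steps):
              x = x + dt * grad_log_post(x) + np.sqrt(2.0 * dt) * rng.standard_normal(x.size)
              out[k] = x
          return out

      # Illustrative target: standard normal posterior, grad log pi(x) = -x
      samples = langevin_samples(lambda x: -x, x0=[3.0], n_steps=20000)
      print("sample mean/var:", samples.mean(), samples.var())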

  6. Universality in the dynamical properties of seismic vibrations

    NASA Astrophysics Data System (ADS)

    Chatterjee, Soumya; Barat, P.; Mukherjee, Indranil

    2018-02-01

    We have studied the statistical properties of the observed magnitudes of seismic vibration data in discrete time in an attempt to understand the underlying complex dynamical processes. The observed magnitude data are taken from six different geographical locations. All possible magnitudes are considered in the analysis, including catastrophic vibrations, foreshocks, aftershocks and commonplace daily vibrations. The probability distribution functions of these data sets obey a scaling law and display a certain universal character. To investigate the universality features in the observed data generated by a complex process, we applied Random Matrix Theory (RMT) in the framework of the Gaussian Orthogonal Ensemble (GOE). For all six locations the observed data show a close fit with the predictions of RMT. This reinforces the idea of universality in the dynamical processes generating seismic vibrations.

  7. Rényi and Tsallis formulations of separability conditions in finite dimensions

    NASA Astrophysics Data System (ADS)

    Rastegin, Alexey E.

    2017-12-01

    Separability conditions for a bipartite quantum system of finite-dimensional subsystems are formulated in terms of Rényi and Tsallis entropies. Entropic uncertainty relations often lead to entanglement criteria. We propose new approach based on the convolution of discrete probability distributions. Measurements on a total system are constructed of local ones according to the convolution scheme. Separability conditions are derived on the base of uncertainty relations of the Maassen-Uffink type as well as majorization relations. On each of subsystems, we use a pair of sets of subnormalized vectors that form rank-one POVMs. We also obtain entropic separability conditions for local measurements with a special structure, such as mutually unbiased bases and symmetric informationally complete measurements. The relevance of the derived separability conditions is demonstrated with several examples.
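
    The convolution scheme at the heart of the approach combines two discrete probability distributions into the distribution of a sum of independent variables; a minimal sketch with example distributions of our own choosing:

      import numpy as np

      def convolve_pmfs(p, q):
          """PMF of X + Y for independent X ~ p, Y ~ q supported on {0, 1, ...}."""
          r = np.convolve(p, q)
          return r / r.sum()           # guard against floating-point drift

      p = np.array([0.2, 0.5, 0.3])    # P(X = 0), P(X = 1), P(X = 2)
      q = np.array([0.6, 0.4])         # P(Y = 0), P(Y = 1)
      print(convolve_pmfs(p, q))       # distribution of X + Y on {0, ..., 3}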

  8. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  9. The importance of risk-aversion as a measurable psychological parameter governing risk-taking behaviour

    NASA Astrophysics Data System (ADS)

    Thomas, P. J.

    2013-09-01

    A utility function with risk-aversion as its sole parameter is developed and used to examine the well-known psychological phenomenon, whereby risk averse people adopt behavioural strategies that are extreme and apparently highly risky. The pioneering work of the psychologist, John W. Atkinson, is revisited, and utility theory is used to extend his mathematical model. His explanation of the psychology involved is improved by regarding risk-aversion not as a discrete variable with three possible states: risk averse, risk neutral and risk confident, but as continuous and covering a large range. A probability distribution is derived, the "motivational density", to describe the process of selecting tasks of different degrees of difficulty. An assessment is then made of practicable methods for measuring risk-aversion.

  10. Mathematical Analysis of Vehicle Delivery Scale of Bike-Sharing Rental Nodes

    NASA Astrophysics Data System (ADS)

    Zhai, Y.; Liu, J.; Liu, L.

    2018-04-01

    Aiming at the lack of scientific and reasonable judgment of vehicle delivery scale and the insufficient optimization of scheduling decisions, and based on features of bike-sharing usage, this paper analyses the applicability of a discrete-time, discrete-state Markov chain and proves that the chain is irreducible, aperiodic and positive recurrent. Based on this analysis, the paper concludes that the limit (steady-state) probability of the bike-sharing Markov chain exists and is independent of the initial probability distribution. The paper then analyses the difficulty of estimating the transition probability matrix parameters and of solving the linear equation system in the traditional solution algorithm for the bike-sharing Markov chain. To improve feasibility, this paper proposes a "virtual two-node vehicle scale solution" algorithm which treats all nodes other than the node to be solved as a single virtual node, and provides the transition probability matrix, the steady-state linear equation system, and the computational methods for the steady-state scale, steady-state arrival time and scheduling decision of the node to be solved. Finally, the paper evaluates the rationality and accuracy of the steady-state probabilities of the proposed algorithm by comparison with the traditional algorithm. By solving the steady-state scale of the nodes one by one, the proposed algorithm is shown to be highly feasible because it lowers the computational difficulty and reduces the number of statistics to be collected, which will help bike-sharing companies optimize the scale and scheduling of nodes.
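
    A minimal sketch of the steady-state computation behind such an algorithm, with the node to be solved set against a single virtual node aggregating all others: solve pi = pi P together with sum(pi) = 1. The two-state transition matrix is an illustrative assumption.

      import numpy as np

      def steady_state(P):
          """Solve pi = pi P, sum(pi) = 1 for an irreducible, aperiodic chain."""
          n = P.shape[0]
          A = np.vstack([P.T - np.eye(n), np.ones(n)])
          b = np.append(np.zeros(n), 1.0)
          pi, *_ = np.linalg.lstsq(A, b, rcond=None)
          return pi

      # Node vs. virtual node: rows give "bike stays / bike moves" probabilities
      P = np.array([[0.9, 0.1],
                    [0.2, 0.8]])
      print(steady_state(P))   # limit distribution, independent of the start state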

  11. Means and Variances without Calculus

    ERIC Educational Resources Information Center

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
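
    The method can be illustrated by discretizing a continuous density on an equally spaced grid and computing the mean and variance as finite sums; the standard normal density and the grid below are our own example, not the article's.

      import numpy as np

      # Discrete approximation to a N(0, 1) density on an equally spaced grid
      x = np.linspace(-6.0, 6.0, 121)
      p = np.exp(-x**2 / 2.0)
      p /= p.sum()                      # renormalise so the weights form a PMF

      mean = np.sum(x * p)
      var = np.sum((x - mean) ** 2 * p)
      print(mean, var)                  # close to 0 and 1 for a fine enough grid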

  12. Stability and bifurcation analysis for the Kaldor-Kalecki model with a discrete delay and a distributed delay

    NASA Astrophysics Data System (ADS)

    Yu, Jinchen; Peng, Mingshu

    2016-10-01

    In this paper, a Kaldor-Kalecki model of business cycle with both discrete and distributed delays is considered. With the corresponding characteristic equation analyzed, the local stability of the positive equilibrium is investigated. It is found that there exist Hopf bifurcations when the discrete time delay passes a sequence of critical values. By applying the method of multiple scales, the explicit formulae which determine the direction of Hopf bifurcation and the stability of bifurcating periodic solutions are derived. Finally, numerical simulations are carried out to illustrate our main results.

  13. Study of a Terrain-Based Motion Estimation Model to Predict the Position of a Moving Target to Enhance Weapon Probability of Kill

    DTIC Science & Technology

    2017-09-01

    The target is modeled based on the kinematic constraints for the type of vehicle and the type of path on which it is traveling. The discrete-time position ...

  14. A methodology for risk analysis based on hybrid Bayesian networks: application to the regasification system of liquefied natural gas onboard a floating storage and regasification unit.

    PubMed

    Martins, Marcelo Ramos; Schleder, Adriana Miralles; Droguett, Enrique López

    2014-12-01

    This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distributions. Thus, this approach makes the methodology particularly useful in cases where the available data for quantification of the probabilities of hazardous events are scarce or nonexistent, there is dependence among events, or when nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty. © 2014 Society for Risk Analysis.
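
    The benefit of mixing continuous and Boolean variables can be conveyed by a tiny Monte Carlo sketch: a leak frequency with lognormal epistemic uncertainty (a continuous parent) feeding a Boolean ignition gate. The structure and every number are invented for illustration and bear no relation to the paper's FSRU model.

      import numpy as np

      rng = np.random.default_rng(42)
      N = 200_000

      # Continuous parent: annual leak frequency with lognormal epistemic uncertainty
      leak_rate = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=N)
      leak = rng.random(N) < leak_rate         # Boolean: a leak occurs this year
      ignition = rng.random(N) < 0.05          # Boolean child: ignition given a leak

      p_fire = np.mean(leak & ignition)
      print(f"P(fire per year) ~ {p_fire:.2e}")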

  15. Moving Out: Transition to Non-Residence among Resident Fathers in the United States, 1968-1997

    ERIC Educational Resources Information Center

    Gupta, Sanjiv; Smock, Pamela J.; Manning, Wendy D.

    2004-01-01

    This article provides the first individual-level estimates of the change over time in the probability of non-residence for initially resident fathers in the United States. Drawing on the 1968-1997 waves of the Panel Study of Income Dynamics, we used discrete-time event history models to compute the probabilities of non-residence for six 5-year…

  16. Mutual Information between Discrete Variables with Many Categories using Recursive Adaptive Partitioning

    PubMed Central

    Seok, Junhee; Seon Kang, Yeong

    2015-01-01

    Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated by their joint probabilities estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which can be easily found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method to provide stable estimations for the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors 45-fold and improved the correlation coefficients with the true values 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study for diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
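
    The conventional plug-in estimator that the paper improves upon can be written in a few lines: estimate the joint probabilities from co-occurrence counts and sum p log(p / (p_x p_y)). The sketch below shows this baseline, not the proposed recursive adaptive partitioning method.

      import numpy as np

      def plugin_mutual_information(x, y):
          """Plug-in MI estimate (in nats) from paired samples of two discrete variables."""
          xs, xi = np.unique(x, return_inverse=True)
          ys, yi = np.unique(y, return_inverse=True)
          joint = np.zeros((len(xs), len(ys)))
          np.add.at(joint, (xi, yi), 1.0)        # co-occurrence counts
          joint /= joint.sum()
          px = joint.sum(axis=1, keepdims=True)
          py = joint.sum(axis=0, keepdims=True)
          nz = joint > 0
          return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

      rng = np.random.default_rng(3)
      x = rng.integers(0, 50, size=1000)         # many categories, few samples per cell
      y = (x + rng.integers(0, 10, size=1000)) % 50
      print(plugin_mutual_information(x, y))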

  17. Discrete ellipsoidal statistical BGK model and Burnett equations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei

    2018-06-01

    A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ESBGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK has a flexible Prandtl number. For the discrete ES-BGK model in the Burnett level, two kinds of discrete velocity model are introduced and the relations between nonequilibrium quantities and the viscous stress and heat flux in the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function through the macroscopic quantities and their space derivatives. The recovery scheme works not only for discrete Boltzmann simulation but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.

  18. A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caflisch, R; Wang, C; Dimarco, G

    2007-10-09

    If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.

  19. A study of renal blood flow regulation using the discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Pavlova, Olga N.; Mosekilde, Erik; Sosnovtseva, Olga V.

    2010-02-01

    In this paper we provide a way to distinguish features of renal blood flow autoregulation mechanisms in normotensive and hypertensive rats based on the discrete wavelet transform. Using the variability of the wavelet coefficients we show distinctions that occur between the normal and pathological states. A reduction of this variability in hypertension is observed on the microscopic level of the blood flow in efferent arteriole of single nephrons. This reduction is probably associated with higher flexibility of healthy cardiovascular system.
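
    A minimal sketch of this kind of analysis, assuming a uniformly sampled blood-flow signal: a discrete wavelet decomposition with PyWavelets, followed by the standard deviation of the coefficients at each level as the variability measure. The synthetic signal, wavelet choice and level count are our own assumptions.

      import numpy as np
      import pywt

      rng = np.random.default_rng(7)
      t = np.linspace(0.0, 60.0, 6000)              # 60 s at 100 Hz (illustrative)
      flow = np.sin(2 * np.pi * 0.03 * t) + 0.2 * rng.standard_normal(t.size)

      coeffs = pywt.wavedec(flow, "db4", level=5)   # discrete wavelet transform
      variability = [np.std(c) for c in coeffs[1:]] # detail-coefficient variability
      print(variability)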

  20. Stability and Hopf bifurcation for a regulated logistic growth model with discrete and distributed delays

    NASA Astrophysics Data System (ADS)

    Fang, Shengle; Jiang, Minghui

    2009-12-01

    In this paper, we investigate the stability and Hopf bifurcation of a new regulated logistic growth with discrete and distributed delays. By choosing the discrete delay τ as a bifurcation parameter, we prove that the system is locally asymptotically stable in a range of the delay and Hopf bifurcation occurs as τ crosses a critical value. Furthermore, explicit algorithm for determining the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions is derived by normal form theorem and center manifold argument. Finally, an illustrative example is also given to support the theoretical results.

  1. Discrete shaped strain sensors for intelligent structures

    NASA Technical Reports Server (NTRS)

    Andersson, Mark S.; Crawley, Edward F.

    1992-01-01

    Design of discrete, highly distributed sensor systems for intelligent structures has been studied. Data obtained indicate that discrete strain-averaging sensors satisfy the functional requirements for distributed sensing of intelligent structures. Bartlett and Gauss-Hanning sensors, in particular, provide good wavenumber characteristics while meeting the functional requirements. They are characterized by good rolloff rates and positive Fourier transforms for all wavenumbers. For the numerical integration schemes, Simpson's rule is considered to be very simple to implement and consistently provides accurate results for five sensors or more. It is shown that a sensor system that satisfies the functional requirements can be applied to a structure that supports mode shapes with purely sinusoidal curvature.

  2. Distributed-observer-based cooperative control for synchronization of linear discrete-time multi-agent systems.

    PubMed

    Liang, Hongjing; Zhang, Huaguang; Wang, Zhanshan

    2015-11-01

    This paper considers output synchronization of discrete-time multi-agent systems with directed communication topologies. The directed communication graph contains a spanning tree with the exosystem as its root. Distributed observer-based consensus protocols are proposed, based on the relative outputs of neighboring agents. A multi-step algorithm is presented to construct the observer-based protocols. In light of the discrete-time algebraic Riccati equation and the internal model principle, the synchronization problem is solved. Finally, a numerical simulation is provided to verify the effectiveness of the theoretical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis, and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
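
    The geometric approximation of exponentially distributed firing times mentioned above amounts to matching the per-step firing probability; a small sketch under our own choice of step size:

      import numpy as np

      def geometric_approx(rate, dt):
          """Per-step firing probability so a geometric clock approximates Exp(rate)."""
          return 1.0 - np.exp(-rate * dt)

      rate, dt = 2.0, 0.01
      p = geometric_approx(rate, dt)
      rng = np.random.default_rng(5)
      # Sample firing times from the geometric clock; mean approaches 1/rate as dt -> 0
      samples = rng.geometric(p, size=100_000) * dt
      print(samples.mean(), 1.0 / rate)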

  4. Discrete-Slots Models of Visual Working-Memory Response Times

    PubMed Central

    Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.

    2014-01-01

    Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956

  5. Chance-Constrained Guidance With Non-Convex Constraints

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.

  6. Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-05-01

    Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures to finite single values within an observation window, thus not being able to characterize the system evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on the inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with cardiac heart failure and gait recordings from short walks of young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.

  7. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continue to grow to cope with real production systems accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems. Since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification is done by using a probability transition matrix mechanism, while the multi-objective problem is handled using Pareto optimality (MPSO). The results of MPSO are better than those of the basic PSO because the MPSO solution set has a higher probability of containing the optimal solution, and the MPSO solution set is closer to the optimal solution.

  8. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
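
    The Markov-chain bookkeeping can be sketched directly: a vector of expected VEM counts per component, updated at each discrete sampling step by a release probability and a transport matrix. The two components, the matrix and the probabilities below are invented for illustration.

      import numpy as np

      # Expected VEM counts on [tool, sample] at the start of the process
      v = np.array([10.0, 0.0])

      # T[i, j]: probability that a VEM released from component i lands on component j
      T = np.array([[0.95, 0.05],
                    [0.00, 1.00]])
      release = np.array([0.01, 0.0])   # per-step release probability per component

      for step in range(5):             # five sampling steps
          moved = v * release           # expected VEMs released this step
          v = (v - moved) + moved @ T   # transport the released fraction
      print("expected VEMs in sample chain:", v[1])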

  9. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution gains importance in survival analysis owing to its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model, in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate in some circumstances to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.

  10. Random Walks on Cartesian Products of Certain Nonamenable Groups and Integer Lattices

    NASA Astrophysics Data System (ADS)

    Vishnepolsky, Rachel

    A random walk on a discrete group satisfies a local limit theorem with power law exponent α if the return probabilities follow the asymptotic law P{return to starting point after n steps} ∼ C ρ^n n^(−α). A group has a universal local limit theorem if all random walks on the group with finitely supported step distributions obey a local limit theorem with the same power law exponent. Given two groups that obey universal local limit theorems, it is not known whether their cartesian product also has a universal local limit theorem. We settle the question affirmatively in one case, by considering a random walk on the cartesian product of a nonamenable group whose Cayley graph is a tree, and the integer lattice. As corollaries, we derive large deviations estimates and a central limit theorem.

  11. A hierarchical two-phase framework for selecting genes in cancer datasets with a neuro-fuzzy system.

    PubMed

    Lim, Jongwoo; Wang, Bohyun; Lim, Joon S

    2016-04-29

    Finding the minimum number of appropriate biomarkers for specific targets such as lung cancer has been a challenging issue in bioinformatics. We propose a hierarchical two-phase framework for selecting appropriate biomarkers that extracts candidate biomarkers from cancer microarray datasets and then selects the minimum number of appropriate biomarkers from the extracted candidate biomarker datasets with a specific neuro-fuzzy algorithm, called a neural network with weighted fuzzy membership functions (NEWFM). As the first phase, the proposed framework extracts candidate biomarkers by using a Bhattacharyya distance method that measures the similarity of two discrete probability distributions. Finally, the proposed framework is able to reduce the cost of finding biomarkers by not receiving medical supplements, and improves the accuracy of the biomarkers in specific cancer target datasets.
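
    For two discrete probability distributions the Bhattacharyya distance has the closed form D_B = -ln Σ_i √(p_i q_i); a minimal sketch with example distributions of our own:

      import numpy as np

      def bhattacharyya_distance(p, q):
          """D_B = -ln sum_i sqrt(p_i * q_i) for two discrete probability distributions."""
          bc = np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))   # Bhattacharyya coefficient
          return -np.log(bc)

      # Binned expression levels of a candidate gene in two diagnostic classes
      p = [0.1, 0.4, 0.5]
      q = [0.3, 0.4, 0.3]
      print(bhattacharyya_distance(p, q))   # 0 iff the distributions coincide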

  12. 11Li Breakup on 208Pb at energies around the Coulomb barrier.

    PubMed

    Fernández-García, J P; Cubero, M; Rodríguez-Gallardo, M; Acosta, L; Alcorta, M; Alvarez, M A G; Borge, M J G; Buchmann, L; Diget, C A; Falou, H A; Fulton, B R; Fynbo, H O U; Galaviz, D; Gómez-Camacho, J; Kanungo, R; Lay, J A; Madurga, M; Martel, I; Moro, A M; Mukha, I; Nilsson, T; Sánchez-Benítez, A M; Shotter, A; Tengblad, O; Walden, P

    2013-04-05

    The inclusive breakup for the (11)Li + (208)Pb reaction at energies around the Coulomb barrier has been measured for the first time. A sizable yield of (9)Li following the (11)Li dissociation has been observed, even at energies well below the Coulomb barrier. Using the first-order semiclassical perturbation theory of Coulomb excitation, it is shown that the breakup probability data measured at small angles can be used to extract the effective breakup energy as well as the slope of the B(E1) distribution close to the threshold. Four-body continuum-discretized coupled-channels calculations, including both nuclear and Coulomb couplings between the target and projectile to all orders, reproduce the measured inclusive breakup cross sections and support the presence of a dipole resonance in the (11)Li continuum at low excitation energy.

  13. Siegert-state expansion for nonstationary systems. IV. Three-dimensional case

    NASA Astrophysics Data System (ADS)

    Tolstikhin, Oleg I.

    2008-03-01

    The Siegert-state expansion approach [O. I. Tolstikhin, Phys. Rev. A 73, 062705 (2006)] is extended to the three-dimensional case. Coupled equations defining the time evolution of coefficients in the expansion of the solution to the time-dependent Schrödinger equation in terms of partial-wave Siegert states are derived, and physical observables (probabilities of transitions to discrete states and the momentum distribution of ejected particles) are expressed in terms of these coefficients. The approach is implemented in terms of Siegert pseudostates and illustrated by calculations of the photodetachment of H- by strong high-frequency laser pulses. The present calculations demonstrate that the interference effect in the laser-atom interaction dynamics found recently in the one-dimensional case [K. Toyota, Phys. Rev. A 76, 043418 (2007)] reveals itself in the three-dimensional case as well.

  14. Stochastic Evolution Equations Driven by Fractional Noises

    DTIC Science & Technology

    2016-11-28

    rate of convergence to zero of the error and the limit in distribution of the error fluctuations. We have studied time-discrete numerical schemes based on Taylor expansions for rough differential equations and for stochastic differential equations driven by fractional Brownian motion.

  15. Synchronization Control for a Class of Discrete-Time Dynamical Networks With Packet Dropouts: A Coding-Decoding-Based Approach.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2017-09-06

    The synchronization control problem is investigated for a class of discrete-time dynamical networks with packet dropouts via a coding-decoding-based approach. The data is transmitted through digital communication channels and only the sequence of finite coded signals is sent to the controller. A series of mutually independent Bernoulli distributed random variables is utilized to model the packet dropout phenomenon occurring in the transmissions of coded signals. The purpose of the addressed synchronization control problem is to design a suitable coding-decoding procedure for each node, based on which an efficient decoder-based control protocol is developed to guarantee that the closed-loop network achieves the desired synchronization performance. By applying a modified uniform quantization approach and the Kronecker product technique, criteria for ensuring the detectability of the dynamical network are established by means of the size of the coding alphabet, the coding period and the probability information of packet dropouts. Subsequently, by resorting to the input-to-state stability theory, the desired controller parameter is obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages. Finally, two simulation examples are provided to demonstrate the effectiveness of the obtained results.

  16. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers when they are deciding how they are going to get to their destination, discrete choice models, based on random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to demonstrate the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficient distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
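
    A minimal sketch of the simulated-probability step, assuming a single Box-Cox-transformed cost attribute with a normally distributed random coefficient: the logit formula is averaged over coefficient draws. All parameter values are illustrative, and the full model estimated in the paper is not reproduced.

      import numpy as np

      def box_cox(x, lam):
          """Box-Cox transform; log(x) in the limit lambda -> 0."""
          return np.log(x) if abs(lam) < 1e-8 else (x**lam - 1.0) / lam

      def simulated_choice_prob(costs, lam, beta_mean, beta_sd, n_draws=5000, seed=0):
          """P(alt) averaged over draws of the random cost coefficient (mixed logit)."""
          rng = np.random.default_rng(seed)
          betas = rng.normal(beta_mean, beta_sd, size=n_draws)          # coefficient draws
          v = betas[:, None] * box_cox(np.asarray(costs, float), lam)   # utilities per draw
          probs = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)      # logit per draw
          return probs.mean(axis=0)                                     # simulated probability

      print(simulated_choice_prob(costs=[2.0, 3.5, 5.0], lam=0.5,
                                  beta_mean=-1.0, beta_sd=0.4))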

  17. State-and-transition simulation models: a framework for forecasting landscape change

    USGS Publications Warehouse

    Daniel, Colin; Frid, Leonardo; Sleeter, Benjamin M.; Fortin, Marie-Josée

    2016-01-01

    Summary: A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state-and-transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST-Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete-time-inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time-since-transition as state variables, to specify one-step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land-use/land-cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State-and-transition simulation models can be applied to a wide range of landscapes, including questions of both land-use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections. When combined with the ST-Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of landscape dynamics.

  18. Numerical modeling of fluid-structure interaction in arteries with anisotropic polyconvex hyperelastic and anisotropic viscoelastic material models at finite strains.

    PubMed

    Balzani, Daniel; Deparis, Simone; Fausten, Simon; Forti, Davide; Heinlein, Alexander; Klawonn, Axel; Quarteroni, Alfio; Rheinbach, Oliver; Schröder, Joerg

    2016-10-01

    The accurate prediction of transmural stresses in arterial walls requires on the one hand robust and efficient numerical schemes for the solution of boundary value problems including fluid-structure interactions and on the other hand the use of a material model for the vessel wall that is able to capture the relevant features of the material behavior. One of the main contributions of this paper is the application of a highly nonlinear, polyconvex anisotropic structural model for the solid in the context of fluid-structure interaction, together with a suitable discretization. Additionally, the influence of viscoelasticity is investigated. The fluid-structure interaction problem is solved using a monolithic approach; that is, the nonlinear system is solved (after time and space discretizations) as a whole without splitting among its components. The linearized block systems are solved iteratively using parallel domain decomposition preconditioners. A simple - but nonsymmetric - curved geometry is proposed that is demonstrated to be suitable as a benchmark testbed for fluid-structure interaction simulations in biomechanics where nonlinear structural models are used. Based on the curved benchmark geometry, the influence of different material models, spatial discretizations, and meshes of varying refinement is investigated. It turns out that often-used standard displacement elements with linear shape functions are not sufficient to provide good approximations of the arterial wall stresses, whereas for standard displacement elements or F-bar formulations with quadratic shape functions, suitable results are obtained. For the time discretization, a second-order backward differentiation formula scheme is used. It is shown that the curved geometry enables the analysis of non-rotationally symmetric distributions of the mechanical fields. For instance, the maximal shear stresses in the fluid-structure interface are found to be higher in the inner curve, which corresponds to clinical observations indicating a high plaque nucleation probability at such locations. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Category representations in the brain are both discretely localized and widely distributed.

    PubMed

    Shehzad, Zarrar; McCarthy, Gregory

    2018-06-01

    Whether category information is discretely localized or represented widely in the brain remains a contentious issue. Initial functional MRI studies supported the localizationist perspective that category information is represented in discrete brain regions. More recent fMRI studies using machine learning pattern classification techniques provide evidence for widespread distributed representations. However, these latter studies have not typically accounted for shared information. Here, we find strong support for distributed representations when brain regions are considered separately. However, localized representations are revealed by using analytical methods that separate unique from shared information among brain regions. The distributed nature of shared information and the localized nature of unique information suggest that brain connectivity may encourage spreading of information but category-specific computations are carried out in distinct domain-specific regions. NEW & NOTEWORTHY Whether visual category information is localized in unique domain-specific brain regions or distributed in many domain-general brain regions is hotly contested. We resolve this debate by using multivariate analyses to parse functional MRI signals from different brain regions into unique and shared variance. Our findings support elements of both models and show information is initially localized and then shared among other regions leading to distributed representations being observed.

  20. DiscML: an R package for estimating evolutionary rates of discrete characters using maximum likelihood.

    PubMed

    Kim, Tane; Hao, Weilong

    2014-09-27

    The study of discrete characters is crucial for the understanding of evolutionary processes. Even though great advances have been made in the analysis of nucleotide sequences, computer programs for non-DNA discrete characters are often dedicated to specific analyses and lack flexibility. Discrete characters often have different transition rate matrices, variable rates among sites and sometimes contain unobservable states. To obtain the ability to accurately estimate a variety of discrete characters, programs with sophisticated methodologies and flexible settings are desired. DiscML performs maximum likelihood estimation for evolutionary rates of discrete characters on a provided phylogeny with the options that correct for unobservable data, rate variations, and unknown prior root probabilities from the empirical data. It gives users options to customize the instantaneous transition rate matrices, or to choose pre-determined matrices from models such as birth-and-death (BD), birth-death-and-innovation (BDI), equal rates (ER), symmetric (SYM), general time-reversible (GTR) and all rates different (ARD). Moreover, we show application examples of DiscML on gene family data and on intron presence/absence data. DiscML was developed as a unified R program for estimating evolutionary rates of discrete characters with no restriction on the number of character states, and with flexibility to use different transition models. DiscML is ideal for the analyses of binary (1s/0s) patterns, multi-gene families, and multistate discrete morphological characteristics.
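
    To make the kind of computation concrete, here is a minimal Python sketch (DiscML itself is an R package, so this is only an analogy) of maximum likelihood estimation of a single rate for a binary presence/absence character on a fixed three-leaf star tree under an equal-rates model; the tree, branch lengths, and data are hypothetical.

        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize_scalar

        # Hypothetical data: presence/absence of a character at three leaves
        # joined to a common root by branches of the given lengths.
        leaf_states = [1, 1, 0]
        branch_lengths = [0.2, 0.4, 0.9]

        def neg_log_likelihood(rate):
            Q = rate * np.array([[-1.0, 1.0], [1.0, -1.0]])  # equal-rates (ER) matrix
            lik = 0.0
            for root_state in (0, 1):                        # uniform prior on the root
                p = 0.5
                for state, bl in zip(leaf_states, branch_lengths):
                    p *= expm(Q * bl)[root_state, state]     # transition probability
                lik += p
            return -np.log(lik)

        fit = minimize_scalar(neg_log_likelihood, bounds=(1e-3, 50.0), method="bounded")
        print("maximum likelihood rate estimate:", fit.x)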

  1. Discrete time Markov chains (DTMC) susceptible infected susceptible (SIS) epidemic model with two pathogens in two patches

    NASA Astrophysics Data System (ADS)

    Lismawati, Eka; Respatiwulan; Widyaningsih, Purnami

    2017-06-01

    The SIS epidemic model describes the pattern of disease spread with the characteristic that recovered individuals can be infected more than once. The numbers of susceptible and infected individuals at each time step follow a discrete-time Markov process, which can be represented by a discrete time Markov chain (DTMC) SIS model. The DTMC SIS epidemic model can be developed for two pathogens in two patches. The aims of this paper are to reconstruct and to apply the DTMC SIS epidemic model with two pathogens in two patches. The model is presented in terms of transition probabilities. Application of the model shows that the number of susceptible individuals decreases while the number of infected individuals increases for each pathogen in each patch.
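
    As a hedged illustration of the mechanics involved, the Python sketch below simulates a DTMC SIS model reduced to a single pathogen in a single patch; the one-event-per-step transition probabilities and all parameter values are illustrative assumptions, not taken from the paper.

        import numpy as np

        # At each step the chain moves I -> I+1 with probability beta*I*(N-I)/N*dt
        # and I -> I-1 with probability gamma*I*dt, staying put otherwise.
        rng = np.random.default_rng(1)
        N, beta, gamma, dt = 100, 0.3, 0.1, 0.05
        I, steps = 5, 2000
        path = [I]
        for _ in range(steps):
            p_up = beta * I * (N - I) / N * dt
            p_down = gamma * I * dt
            u = rng.random()
            if u < p_up:
                I += 1
            elif u < p_up + p_down:
                I -= 1
            path.append(I)
        print("infected at end of run:", path[-1])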

  2. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    NASA Astrophysics Data System (ADS)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S_3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated.

  3. Hydrometeorological hazards basin "El Salado", with detailed analysis of the micro "El Zarco" and "Tamarindos" in the municipality of Puerto Vallarta, Jalisco

    NASA Astrophysics Data System (ADS)

    Núñez Gutiérrez, M.

    2013-05-01

    In recent years, there has been a change in the flood hazard of the "El Salado" basin, specifically in the watersheds of the "El Zarco" and "Tamarindos" streams, located in the northern part of the municipality of Puerto Vallarta, Jalisco. Precipitation has lately become convective and cyclonic in character, bringing short-lived but intense storms; coupled with the growth of the metropolitan area of Puerto Vallarta, this has clogged the drainage outlet to the sea, so that water is stored on site until it disappears through evapotranspiration. The hydrometeorological analysis is performed with the triangulation method using hydrometric records from the weather station "La Desembocada" of Puerto Vallarta, the only one authorized by the CNA; the main source of official values for weather stations in the Mexican Republic, however, is the ERIC III database (Rapid Climatological Information Extractor, version III), which provides precipitation data and average, minimum, and maximum monthly temperatures for its stations. This is combined with probabilistic methods based on exploration of the probability distribution function (PDF): for records shorter than 30 years, small-sample methods such as Pearson's chi-square, Student's t, and Fisher's F tests are used, and discrete or continuous probability functions are used to estimate rainfall intensity. Digital terrain models with sufficient mapping for elevations, precipitation, and temperature (GIS) are also used.

  4. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

    PubMed

    Probst, Dimitri; Petrovici, Mihai A; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz

    2015-01-01

    The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
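
    The abstract computation approximated by such networks can be illustrated with a short, hedged Python sketch: a Gibbs sampler over binary random variables for a Boltzmann distribution p(z) proportional to exp(z'Wz/2 + b'z); the coupling matrix and biases are invented for illustration and are not the paper's network.

        import numpy as np

        rng = np.random.default_rng(0)
        W = np.array([[0.0, 1.5], [1.5, 0.0]])  # symmetric couplings, zero diagonal
        b = np.array([-0.5, -0.5])              # biases
        z = rng.integers(0, 2, size=2).astype(float)
        counts = np.zeros(4)
        for _ in range(20000):
            for k in range(len(z)):
                u = W[k] @ z + b[k]             # analogue of a membrane potential
                z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
            counts[int(z[0] + 2 * z[1])] += 1
        print("empirical probabilities of states 00,10,01,11:", counts / counts.sum())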

  5. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons

    PubMed Central

    Probst, Dimitri; Petrovici, Mihai A.; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz

    2015-01-01

    The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems. PMID:25729361

  6. Satellite Based Probabilistic Snow Cover Extent Mapping (SCE) at Hydro-Québec

    NASA Astrophysics Data System (ADS)

    Teasdale, Mylène; De Sève, Danielle; Angers, Jean-François; Perreault, Luc

    2016-04-01

    Over 40% of Canada's water resources are in Quebec, and Hydro-Québec has become one of the largest producers of hydroelectricity in the world, with a total installed capacity of 36,643 MW. The Hydro-Québec generating fleet includes 27 large reservoirs with a combined storage capacity of 176 TWh, 668 dams, and 98 control structures. Over 98% of all electricity used to supply the domestic market comes from water resources, and the excess output is sold on the wholesale markets. In this perspective, efficient management of water resources is needed, based primarily on good river flow estimation using appropriate hydrological data. Snow on the ground is one of the significant variables, representing 30% to 40% of the annual energy reserve. More specifically, information on snow cover extent (SCE) and snow water equivalent (SWE) is crucial for hydrological forecasting, particularly in northern regions, since the snowmelt provides the water that fills the reservoirs and is subsequently used for hydropower generation. For several years, Hydro-Québec's research institute (IREQ) has developed several algorithms to map SCE and SWE. So far all of the methods have been deterministic. However, given the need to maximize the efficient use of all resources while ensuring reliability, the electrical systems must now be managed taking into account all risks. Since snow cover estimation is based on limited spatial information, it is important to quantify and handle its uncertainty in the hydrological forecasting system. This paper presents the first results of a probabilistic algorithm for mapping SCE, combining a Bayesian mixture of probability distributions and multiple logistic regression models applied to passive microwave data. This approach assigns, for each grid point, probabilities to a set of mutually exclusive discrete outcomes: "snow" and "no snow". Its performance was evaluated using the Brier score, which is particularly appropriate for measuring the accuracy of probabilistic discrete predictions. The scores were computed by comparing the snow probabilities produced by our models with Hydro-Québec's snow ground data.
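
    The Brier score used for verification is simple to state: the mean squared difference between forecast probabilities and binary outcomes. A minimal Python sketch with hypothetical probabilities and observations:

        import numpy as np

        p_snow = np.array([0.9, 0.7, 0.2, 0.1])  # forecast P("snow") per grid point
        observed = np.array([1, 1, 0, 1])        # ground truth: 1 = snow, 0 = no snow
        brier = np.mean((p_snow - observed) ** 2)
        print(f"Brier score: {brier:.3f}")       # 0 is perfect, 1 is worst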

  7. Requirements analysis for a hardware, discrete-event, simulation engine accelerator

    NASA Astrophysics Data System (ADS)

    Taylor, Paul J., Jr.

    1991-12-01

    An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description Language (VHDL), for a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.
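
    The core algorithm such an accelerator targets is the discrete-event loop: pop the earliest pending event, execute it, and possibly schedule new events. A minimal single-node Python sketch (event names and times are illustrative; in a distributed simulation each node runs such a loop and exchanges events destined for other nodes):

        import heapq

        def run(initial_events, horizon):
            queue = list(initial_events)        # (time, name) pairs
            heapq.heapify(queue)
            while queue:
                t, name = heapq.heappop(queue)  # earliest pending event
                if t > horizon:
                    break
                print(f"t={t:5.2f}  handling {name}")
                if name == "arrival":           # handlers may schedule future events
                    heapq.heappush(queue, (t + 3.0, "departure"))

        run([(0.0, "arrival"), (1.0, "arrival")], horizon=10.0)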

  8. Two-walker discrete-time quantum walks on the line with percolation

    NASA Astrophysics Data System (ADS)

    Rigovacca, L.; di Franco, C.

    2016-02-01

    One goal of quantum-walk research is the exploitation of the intrinsic quantum nature of multiple walkers, in order to achieve the full computational power of the model. Here we study the behaviour of two non-interacting particles performing a quantum walk on the line when the possibility of lattice imperfections, in the form of missing links, is considered. We investigate two regimes, static and dynamical percolation, which correspond to different time scales for the evolution of the imperfections relative to that of the quantum walk. By studying the qualitative behaviour of three two-particle quantities for different probabilities of having missing bonds, we argue that the chosen particle-exchange symmetry of the input state strongly affects the output of the walk, even in noisy and highly non-ideal regimes. We provide evidence against the possibility of gathering information about the walkers' indistinguishability from the observation of bunching phenomena in the output distribution, in all those situations that require a comparison between averaged quantities. Although the spread of the walk is not substantially changed by the addition of a second particle, we show that the presence of multiple walkers can be beneficial for a procedure to estimate the probability of having a broken link.

  9. Mechanisms of stochastic focusing and defocusing in biological reaction networks: insight from accurate chemical master equation (ACME) solutions.

    PubMed

    Gursoy, Gamze; Terebus, Anna; Youfang Cao; Jie Liang

    2016-08-01

    Stochasticity plays important roles in the regulation of biochemical reaction networks when the copy numbers of molecular species are small. Studies based on the Stochastic Simulation Algorithm (SSA) have shown that a basic reaction system can display stochastic focusing (SF), an increase in the sensitivity of the network as a result of signal noise. Although the SSA has been widely used to study stochastic networks, it is ineffective in examining rare events, and this becomes a significant issue when the tails of probability distributions are relevant, as is the case for SF. Here we use the ACME method to obtain the exact solution of the discrete Chemical Master Equation and to study a network where SF was reported. We show that the level of SF depends on the degree of fluctuation of the signal molecule. We discover that signaling noise under certain conditions in the same reaction network can lead to a decrease in the system sensitivities, so that the network experiences stochastic defocusing. These results highlight the fundamental role of stochasticity in biological reaction networks and the need for exact computation of the probability landscape of the molecules in the system.
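
    For contrast with ACME's direct solution of the master equation, here is a minimal Python sketch of the Gillespie SSA mentioned above, for a birth-death system with production rate k and per-molecule degradation rate g; all values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        k, g = 10.0, 0.1
        x, t, t_end = 0, 0.0, 100.0
        while t < t_end:
            rates = np.array([k, g * x])         # propensities of the two reactions
            total = rates.sum()
            t += rng.exponential(1.0 / total)    # waiting time to the next reaction
            if rng.random() < rates[0] / total:  # choose which reaction fires
                x += 1
            else:
                x -= 1
        print("copy number at end of run:", x)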

  10. A statistical study of gyro-averaging effects in a reduced model of drift-wave transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.

    2016-08-25

    Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, $K_0$, becomes $K_0 J_0(\hat{p})$, where $J_0$ is the zeroth-order Bessel function and $\hat{p}$ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for $\hat{p}$, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude $K_0 J_0(\hat{p})$. Using these results, we compute the probability of loss of confinement (i.e., global chaos), $P_c$, and the probability of trapping, $P_t$; it is shown that $P_c$ provides an upper bound for the escape rate, and that $P_t$ provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte Carlo simulations of particle transport.

  11. Size-distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and lamm equation modeling.

    PubMed

    Schuck, P

    2000-03-01

    A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size-distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). It can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. More details on the size-distribution can be obtained than from van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.

  12. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, namely the Meixner polynomials. One example is bound up with a discrete Nikishin system, while the other leads to essentially new effects. The limit distribution of the zeros of the polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  13. Security of a discretely signaled continuous variable quantum key distribution protocol for high rate systems.

    PubMed

    Zhang, Zheshen; Voss, Paul L

    2009-07-06

    We propose a continuous variable based quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks with realistic lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. This protocol is promising for convenient, high-speed operation at link distances up to 50 km with the use of post-selection.

  14. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.

  15. Modeling discrete and rhythmic movements through motor primitives: a review.

    PubMed

    Degallier, Sarah; Ijspeert, Auke

    2010-10-01

    Rhythmic and discrete movements are frequently considered separately in motor control, probably because different techniques are commonly used to study and model them. Yet the increasing interest in finding a comprehensive model for movement generation requires bridging the different perspectives arising from the study of those two types of movements. In this article, we consider discrete and rhythmic movements within the framework of motor primitives, i.e., of modular generation of movements. In this way we hope to gain an insight into the functional relationships between discrete and rhythmic movements and thus into a suitable representation for both of them. Within this framework we can define four possible categories of modeling for discrete and rhythmic movements depending on the required command signals and on the spinal processes involved in the generation of the movements. These categories are first discussed in terms of biological concepts such as force fields and central pattern generators and then illustrated by several mathematical models based on dynamical system theory. A discussion on the plausibility of these models concludes the work.

  16. Parrondo's game using a discrete-time quantum walk

    NASA Astrophysics Data System (ADS)

    Chandrashekar, C. M.; Banerjee, Subhashish

    2011-04-01

    We present a new form of a Parrondo game using a discrete-time quantum walk on a line. The two players A and B, each with a different quantum coin operator and each losing the game individually, can develop a strategy to emerge as joint winners by using their coins alternately, or in combination, at each step of the quantum walk evolution. We also present a strategy for player A (B) to attain a winning probability greater than that of player B (A). The significance of the game strategy for information theory and physical applications is also discussed.
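
    A hedged single-coin Python sketch of the underlying discrete-time quantum walk on a line (Parrondo strategies would alternate two different coin operators; a single Hadamard coin is used here for brevity):

        import numpy as np

        steps = 100
        n_sites = 2 * steps + 1
        psi = np.zeros((n_sites, 2), dtype=complex)
        psi[steps, 0] = 1.0                      # walker starts at the origin
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        for _ in range(steps):
            psi = psi @ H.T                      # coin toss on every site
            new = np.zeros_like(psi)
            new[1:, 0] = psi[:-1, 0]             # coin |0> component moves right
            new[:-1, 1] = psi[1:, 1]             # coin |1> component moves left
            psi = new
        prob = (abs(psi) ** 2).sum(axis=1)       # position distribution after 100 steps
        print("total probability:", prob.sum())  # unitarity check: ~1.0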

  17. A Mathematics Software Database Update.

    ERIC Educational Resources Information Center

    Cunningham, R. S.; Smith, David A.

    1987-01-01

    Contains an update of an earlier listing of software for mathematics instruction at the college level. Topics are: advanced mathematics, algebra, calculus, differential equations, discrete mathematics, equation solving, general mathematics, geometry, linear and matrix algebra, logic, statistics and probability, and trigonometry. (PK)

  18. Encoding dependence in Bayesian causal networks

    USDA-ARS?s Scientific Manuscript database

    Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...

  19. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. Tables of the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the

  20. Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing

    DTIC Science & Technology

    2012-12-14

    Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing. Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion Stoica. However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of

  1. Parallel and Distributed Computing Combinatorial Algorithms

    DTIC Science & Technology

    1993-10-01

    Discrete Math, 1991. In press. [55] L. Finkelstein, D. Kleitman, and T. Leighton. Applying the classification theorem for finite simple groups to minimize... Mathematics (in press). [74] L. Heath, T. Leighton, and A. Rosenberg. Comparing queue and stack layouts. SIAM J. Discrete Math, 5(3):398-412, August 1992... line can meet only a few. DIMACS Series in Discrete Math and Theoretical Computer Science, 9, 1993. Publications, Presentations and Theses Supported

  2. Target-type probability combining algorithms for multisensor tracking

    NASA Astrophysics Data System (ADS)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), as well as the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs, IRSTs, as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations, as well as missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting, by a procedure for the propagation of target type probability states between measurement time instances. Other important properties of the algorithms are their abilities to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.

  3. Optimal reactive power planning for distribution systems considering intermittent wind power using Markov model and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Li, Cheng

    Wind farms, photovoltaic arrays, fuel cells, and micro-turbines are all considered to be Distributed Generation (DG). DG is defined as the generation of power which is dispersed throughout a utility's service territory and either connected to the utility's distribution system or isolated in a small grid. This thesis addresses modeling and economic issues pertaining to optimal reactive power planning for a distribution system with wind power generation (WPG) units. Wind farms are inclined to cause reverse power flows and voltage variations due to the random-like outputs of wind turbines. To deal with this kind of problem, caused by the widespread usage of wind power generation, this thesis investigates voltage and reactive power controls in such a distribution system. Consequently, static capacitors (SC) and transformer taps are introduced into the system and treated as controllers. For the purpose of obtaining optimum voltage and realizing reactive power control, the research proposes a proper coordination among controllers such as the on-load tap changer (OLTC) and feeder-switched capacitors. Moreover, in order to capture its uncertainty, the wind power generation is modeled by a Markov model, which makes it possible to calculate the probabilities of all scenarios. Outputs with consecutive and discrete values are used for transitions between successive time states and for within-state wind speeds; the thesis describes the method for generating the wind speed time series from the transition probability matrix, as sketched below. After that, using a genetic algorithm, the optimal locations of SCs, the sizes of SCs, and the transformer taps are determined so as to minimize the cost or the power loss and, more importantly, to improve voltage profiles. The applicability of the proposed method is verified through simulations on a 9-bus system and a 30-bus system. The simulation results indicate that as long as the available capacitors are able to sufficiently compensate the reactive power demand, the DG operation no longer imposes a significant effect on the voltage fluctuations in the distribution system, and the proposed approach is efficient, simple, and straightforward.
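
    The wind-speed generation step can be sketched in a few lines of Python; the three discretized wind states and the transition matrix below are illustrative assumptions, not the thesis's fitted values.

        import numpy as np

        rng = np.random.default_rng(0)
        P = np.array([[0.70, 0.25, 0.05],
                      [0.20, 0.60, 0.20],
                      [0.05, 0.35, 0.60]])       # rows sum to one
        speeds = np.array([3.0, 8.0, 14.0])      # representative speed per state (m/s)
        state, series = 0, []
        for _ in range(24):                      # one illustrative day, hourly steps
            state = rng.choice(3, p=P[state])    # next state drawn from the current row
            series.append(speeds[state])
        print(series)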

  4. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step-up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline-lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method, and the influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that ventilation processes in healthy lungs are more likely characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs.

  5. Power and sample size evaluation for the Cochran-Mantel-Haenszel mean score (Wilcoxon rank sum) test and the Cochran-Armitage test for trend.

    PubMed

    Lachin, John M

    2011-11-10

    The power of a chi-square test, and thus the required sample size, are a function of the noncentrality parameter that can be obtained as the limiting expectation of the test statistic under an alternative hypothesis specification. Herein, we apply this principle to derive simple expressions for two tests that are commonly applied to discrete ordinal data. The Wilcoxon rank sum test for the equality of distributions in two groups is algebraically equivalent to the Mann-Whitney test; the Kruskal-Wallis test applies to multiple groups. These tests are equivalent to a Cochran-Mantel-Haenszel mean score test using rank scores for a set of C discrete categories. Although various authors have assessed the power function of the Wilcoxon and Mann-Whitney tests, herein it is shown that the power of these tests with discrete observations, that is, with tied ranks, is readily provided by the power function of the corresponding Cochran-Mantel-Haenszel mean score test for two and R > 2 groups. These expressions yield results virtually identical to those derived previously for rank scores and also apply to other score functions. The Cochran-Armitage test for trend assesses whether there is a monotonically increasing or decreasing trend in the proportions with a positive outcome or response over the C ordered categories of an ordinal independent variable, for example, dose. Herein, it is shown that the power of the test is a function of the slope of the response probabilities over the ordinal scores assigned to the groups, which yields simple expressions for the power of the test. Copyright © 2011 John Wiley & Sons, Ltd.
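
    The underlying principle (power as the tail probability of a noncentral chi-square beyond the null critical value) can be sketched in Python as follows; the noncentrality value is illustrative rather than computed from the paper's expressions.

        from scipy.stats import chi2, ncx2

        df, alpha = 1, 0.05
        noncentrality = 7.85                    # e.g., n times a squared effect size
        critical = chi2.ppf(1 - alpha, df)      # null critical value
        power = ncx2.sf(critical, df, noncentrality)
        print(f"power = {power:.3f}")           # approximately 0.80 here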

  6. Different strategies for spatial updating in yaw and pitch path integration

    PubMed Central

    Goeke, Caspar M.; König, Peter; Gramann, Klaus

    2013-01-01

    Research in spatial navigation has revealed the existence of discrete strategies defined by the use of distinct reference frames during virtual path integration. The present study investigated the distribution of these navigation strategies as a function of gender, video gaming experience, and self-estimates of spatial navigation abilities in a population of 300 subjects. Participants watched videos of virtual passages through a star field with one turn in either the horizontal (yaw) or the vertical (pitch) axis. At the end of a passage they selected one out of four homing arrows to indicate the initial starting location. To solve the task, participants could employ two discrete strategies, navigating within either an egocentric or an allocentric reference frame. The majority of valid subjects (232/260) consistently used the same strategy in more than 75% of all trials. With that approach, 33.1% of all participants were classified as Turners (using an egocentric reference frame on both axes) and 46.5% as Non-turners (using an allocentric reference frame on both axes); 9.2% of all participants consistently used an egocentric reference frame in the yaw plane but an allocentric reference frame in the pitch plane (Switchers). Investigating the influence of gender on navigation strategies revealed that females predominantly used the Non-turner strategy, while males used both the Turner and the Non-turner strategies with comparable probabilities. Contrary to expectations, video gaming experience did not influence strategy use. Based on a strong quantitative basis, with a sample size about an order of magnitude larger than in typical psychophysical studies, these results demonstrate that most people reliably use one of three possible navigation strategies (Turners, Non-turners, Switchers) for spatial updating and provide a sound estimate of how those strategies are distributed within the general population. PMID:23412683

  7. Discrete Event Simulation of Distributed Team Communication

    DTIC Science & Technology

    2012-03-22

    performs, and auditory information that is provided through multiple audio devices with speech response. This paper extends previous discrete event workload... (2008, pg. 1) notes that "Architecture modeling furnishes abstractions for use in managing complexities, allowing engineers to visualise the proposed

  8. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-08

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source. Therefore, there are limitations to the design accuracy for small, high-power applications with a short distance between the light source and optical system. A design method for compensating for the light distribution of an extended source after the initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and limited design accuracy with multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, design for the discrete planar optics was achieved for applications in illumination for LED flash. In addition, discrete planar optics for LED illuminance were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle, and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.

  9. Maxima of two random walks: Universal statistics of lead changes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Naim, E.; Krapivsky, P. L.; Randon-Furling, J.

    2016-04-18

    In this study, we investigate statistics of lead changes of the maxima of two discrete-time random walks in one dimension. We show that the average number of lead changes grows as $\pi^{-1}\ln t$ in the long-time limit. We present theoretical and numerical evidence that this asymptotic behavior is universal. Specifically, this behavior is independent of the jump distribution: the same asymptotic underlies standard Brownian motion and symmetric Lévy flights. We also show that the probability to have at most n lead changes behaves as $t^{-1/4}(\ln t)^{n}$ for Brownian motion and as $t^{-\beta(\mu)}(\ln t)^{n}$ for symmetric Lévy flights with index $\mu$. The decay exponent $\beta \equiv \beta(\mu)$ varies continuously with the Lévy index when $0 < \mu < 2$, and remains constant $\beta = 1/4$ for $\mu > 2$.
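
    A small Monte Carlo check of the $\pi^{-1}\ln t$ law for ±1 random walks, as a hedged Python sketch (the asymptotic law holds only up to finite-time corrections, so the two printed numbers should agree roughly, not exactly):

        import numpy as np

        rng = np.random.default_rng(0)
        t, trials = 10_000, 200
        changes = np.zeros(trials)
        for i in range(trials):
            m1 = np.maximum.accumulate(np.cumsum(rng.choice([-1, 1], size=t)))
            m2 = np.maximum.accumulate(np.cumsum(rng.choice([-1, 1], size=t)))
            lead = np.sign(m1 - m2)
            lead = lead[lead != 0]              # drop ties between the maxima
            changes[i] = np.count_nonzero(np.diff(lead))
        print("mean lead changes :", changes.mean())
        print("pi^-1 ln t        :", np.log(t) / np.pi)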

  10. Statistical plant set estimation using Schroeder-phased multisinusoidal input design

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

    A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, and many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace 'hard' bounds presently used in many robust control analysis and synthesis methods.
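
    A minimal Python sketch of a Schroeder-phased multisine, which places input energy only at N discrete harmonics while Schroeder's phase rule phi_k = -pi*k*(k-1)/N keeps the peak amplitude (crest factor) low; the frequencies, amplitudes, and sampling rate are illustrative assumptions.

        import numpy as np

        N, f0, fs = 16, 1.0, 512.0
        t = np.arange(0, 1.0, 1.0 / fs)
        k = np.arange(1, N + 1)
        phi = -np.pi * k * (k - 1) / N           # Schroeder phases
        u = sum(np.cos(2 * np.pi * kk * f0 * t + p) for kk, p in zip(k, phi))
        print("crest factor:", np.max(np.abs(u)) / np.sqrt(np.mean(u ** 2)))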

  11. On the entropy function in sociotechnical systems

    PubMed Central

    Montroll, Elliott W.

    1981-01-01

    The entropy function H = -Σ p_j log p_j (p_j being the probability of a system being in state j) and its continuum analogue H = -∫ p(x) log p(x) dx are fundamental in Shannon's theory of information transfer in communication systems. It is here shown that the discrete form of H also appears naturally in single-lane traffic flow theory. In merchandising, goods flow from a wholesaler through a retailer to a customer. Certain features of the process may be deduced from price distribution functions derived from Sears Roebuck and Company catalogues. It is found that the dispersion in the logarithm of catalogue prices of a given year has remained about constant, independently of the year, for over 75 years. From this it may be inferred that the continuum entropy function for the variable logarithm of price had inadvertently, through Sears Roebuck policies, been maximized for that firm subject to the observed dispersion. PMID:16593136

  12. On the entropy function in sociotechnical systems.

    PubMed

    Montroll, E W

    1981-12-01

    The entropy function H = -Σ p_j log p_j (p_j being the probability of a system being in state j) and its continuum analogue H = -∫ p(x) log p(x) dx are fundamental in Shannon's theory of information transfer in communication systems. It is here shown that the discrete form of H also appears naturally in single-lane traffic flow theory. In merchandising, goods flow from a wholesaler through a retailer to a customer. Certain features of the process may be deduced from price distribution functions derived from Sears Roebuck and Company catalogues. It is found that the dispersion in the logarithm of catalogue prices of a given year has remained about constant, independently of the year, for over 75 years. From this it may be inferred that the continuum entropy function for the variable logarithm of price had inadvertently, through Sears Roebuck policies, been maximized for that firm subject to the observed dispersion.
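
    In code, the discrete form of H from these two records is a one-liner; the distribution below is illustrative.

        import numpy as np

        p = np.array([0.5, 0.25, 0.125, 0.125])  # an illustrative distribution
        H = -np.sum(p * np.log2(p))              # H = -sum_j p_j log p_j (in bits)
        print(f"H = {H} bits")                   # 1.75 bits here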

  13. Stochastic Dynamics through Hierarchically Embedded Markov Chains

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Vítor V.; Santos, Fernando P.; Santos, Francisco C.; Pacheco, Jorge M.

    2017-02-01

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects—such as mutations in evolutionary dynamics and a random exploration of choices in social systems—including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
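
    For a small chain, the object being approximated here can be computed exactly. A minimal Python sketch with an illustrative three-state row-stochastic matrix, where the stationary distribution is the left eigenvector for eigenvalue 1:

        import numpy as np

        P = np.array([[0.9, 0.1, 0.0],
                      [0.2, 0.7, 0.1],
                      [0.0, 0.3, 0.7]])          # time-invariant transition matrix
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmax(np.real(vals))])
        pi /= pi.sum()                           # normalize to a probability vector
        print("stationary distribution:", pi)    # satisfies pi @ P == pi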

  14. Real time visualization of quantum walk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyazaki, Akihide; Hamada, Shinji; Sekino, Hideo

    2014-02-20

    Time evolution of quantum particles such as electrons is described by the time-dependent Schrödinger equation (TDSE). The TDSE can be regarded as the diffusion equation for electrons with an imaginary diffusion coefficient, and it is solved here by a quantum walk (QW), which is regarded as a quantum version of the classical random walk. The diffusion equation is solved in discretized space/time, as in the case of the classical random walk, with an additional unitary transformation of an internal degree of freedom typical of quantum particles. We call the QW used for the solution of the TDSE a Schrödinger walk (SW). For the observation of the evolution of one quantum particle under a given potential on the attosecond scale, we attempt successive computation and visualization of the SW. Using Pure Data programming, we observe the correct behavior of the probability distribution under the given potential in real time for observers on the attosecond scale.

  15. Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2017-07-01

    We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.

  16. Behavior identification based on geotagged photo data set.

    PubMed

    Liu, Guo-qi; Zhang, Yi-jia; Fu, Ying-mao; Liu, Ying

    2014-01-01

    The popularity of mobile devices has produced image data sets with geographic information, time information, and text descriptions, called geotagged photo data sets. Dividing this kind of data by behavior and location not only identifies the user's important locations and daily behaviors, but also helps users sort huge image collections. This paper proposes a method to build an index based on multiple classification results: the data set is divided multiple times and labels are distributed to the data according to the estimated probabilities of the classification results, so as to identify users' important locations and daily behaviors. This paper collects 1400 discrete sets of data as experimental data to verify the proposed method. The experimental results show that the index agrees closely with the actual tagging results.

  17. Stochastic Dynamics through Hierarchically Embedded Markov Chains.

    PubMed

    Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M

    2017-02-03

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects—such as mutations in evolutionary dynamics and a random exploration of choices in social systems—including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.

  18. Multiporosity flow in fractured low-permeability rocks: Extension to shale hydrocarbon reservoirs

    DOE PAGES

    Kuhlman, Kristopher L.; Malama, Bwalya; Heath, Jason E.

    2015-02-05

    We present a multiporosity extension of classical double- and triple-porosity fractured rock flow models for slightly compressible fluids. The multiporosity model is an adaptation of the multirate solute transport model of Haggerty and Gorelick (1995) to viscous flow in fractured rock reservoirs, and a generalization of both pseudo-steady-state and transient interporosity flow double-porosity models. The model includes a fracture continuum and an overlapping distribution of multiple rock matrix continua, whose fracture-matrix exchange coefficients are specified through a discrete probability mass function. Semianalytical cylindrically symmetric solutions to the multiporosity mathematical model are developed using the Laplace transform to illustrate its behavior. Furthermore, the multiporosity model presented here is conceptually simple, yet flexible enough to simulate common conceptualizations of double- and triple-porosity flow. This combination of generality and simplicity makes the multiporosity model a good choice for flow modelling in low-permeability fractured rocks.

  19. The detailed balance principle and the reciprocity theorem between photocarrier collection and dark carrier distribution in solar cells

    NASA Astrophysics Data System (ADS)

    Rau, Uwe; Brendel, Rolf

    1998-12-01

    It is shown that a recently described general relationship between the local collection efficiency of solar cells and the dark carrier concentration (reciprocity theorem) follows directly from the principle of detailed balance. We derive the relationship for situations where transport of charge carriers occurs between discrete states, as well as for situations where electronic transport is described in terms of continuous functions. Combining both situations allows us to extend the range of applicability of the reciprocity theorem to all types of solar cells, including, e.g., metal-insulator-semiconductor-type and electrochemical solar cells, as well as to include the impurity photovoltaic effect. We generalize the theorem further to situations where the occupation probability of electronic states is governed by Fermi-Dirac statistics instead of the Boltzmann statistics underlying preceding work. In such a situation the reciprocity theorem is restricted to small departures from equilibrium.

  20. Sequence Complexity of Amyloidogenic Regions in Intrinsically Disordered Human Proteins

    PubMed Central

    Das, Swagata; Pal, Uttam; Das, Supriya; Bagga, Khyati; Roy, Anupam; Mrigwani, Arpita; Maiti, Nakul C.

    2014-01-01

    An amyloidogenic region (AR) in a protein sequence plays a significant role in protein aggregation and amyloid formation. We have investigated the sequence complexity of ARs present in intrinsically disordered human proteins. More than 80% of the human proteins in the disordered protein databases (DisProt+IDEAL) contained one or more ARs. With decreasing protein disorder, the AR content of the protein sequence decreased. A probability density distribution analysis and a discrete analysis of AR sequences showed that ∼8% of the residues in a protein sequence were in ARs and that an AR was on average 8 residues long. The residues in ARs were high in sequence complexity and seldom overlapped with low-complexity regions (LCRs), which are largely abundant in disordered proteins. The sequences in ARs showed mixed conformational adaptability towards α-helix, β-sheet/strand, and coil conformations. PMID:24594841

  1. A Simulation Based Approach to Optimize Berth Throughput Under Uncertainty at Marine Container Terminals

    NASA Technical Reports Server (NTRS)

    Golias, Mihalis M.

    2011-01-01

    Berth scheduling is a critical function at marine container terminals, and determining the best berth schedule depends on several factors, including the type and function of the port, the size of the port, its location, nearby competition, and the type of contractual agreement between the terminal and the carriers. In this paper we formulate the berth scheduling problem as a bi-objective mixed-integer problem with the objectives of maximizing customer satisfaction and the reliability of the berth schedule, under the assumption that vessel handling times are stochastic parameters following a discrete and known probability distribution. A combination of an exact algorithm, a Genetic Algorithm-based heuristic, and a simulation-based post-Pareto analysis is proposed as the solution approach to the resulting problem. Based on a number of experiments, it is concluded that the proposed berth scheduling policy outperforms a berth scheduling policy in which reliability is not considered.

  2. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    NASA Astrophysics Data System (ADS)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
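
    A hedged Python sketch of the parent-Gaussian idea with an intermittent (mixed-type) target marginal: an AR(1) parent process is mapped through the normal CDF and an inverse target CDF that is zero with probability p0 and gamma-distributed otherwise. All parameters are illustrative, and the paper's correlation transformation step (matching the parent correlation to the target correlation) is omitted for brevity.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, rho, p0 = 10_000, 0.8, 0.6
        z = np.empty(n)
        z[0] = rng.standard_normal()
        for i in range(1, n):                    # parent Gaussian AR(1) process
            z[i] = rho * z[i - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal()
        u = stats.norm.cdf(z)                    # uniform marginals, correlation preserved in rank
        x = np.where(u <= p0, 0.0,               # dry (zero) part of the marginal
                     stats.gamma.ppf((u - p0) / (1 - p0), a=0.7, scale=2.0))
        print("fraction of zeros:", np.mean(x == 0.0))   # close to p0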

  3. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependent relationships among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast wave height, wind speed, and current velocity data for the Bohai Sea are sampled for a case study. Four kinds of distributions, namely the Gumbel distribution, the lognormal distribution, the Weibull distribution, and the Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity; the Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make maximal use of the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models, and platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
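
    One of the Archimedean families used here, the Clayton copula, can be sampled by conditional inversion. A hedged Python sketch with illustrative Weibull marginals standing in for wave height and wind speed (theta, shape, and scale values are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        theta, n = 2.0, 5000
        u1 = rng.random(n)
        w = rng.random(n)                        # conditional-inversion auxiliary uniforms
        u2 = (u1 ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        hs = 2.0 * (-np.log(1 - u1)) ** (1 / 1.5)  # wave height, Weibull marginal
        ws = 8.0 * (-np.log(1 - u2)) ** (1 / 2.0)  # wind speed, Weibull marginal
        print("correlation of the uniforms:", np.corrcoef(u1, u2)[0, 1])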

  4. Statistically optimal analysis of state-discretized trajectory data from multiple thermodynamic states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hao; Mey, Antonia S. J. S.; Noé, Frank

    2014-12-07

    We propose a discrete transition-based reweighting analysis method (dTRAM) for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). dTRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), dTRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, dTRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g., rates, timescales). In the limit of a single thermodynamic state, dTRAM estimates a maximum likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization of both estimators.
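
    dTRAM itself operates on transition counts between discretized states; as a point of reference, here is a minimal sketch of the WHAM self-consistent iteration that the abstract identifies as dTRAM's limit for uncorrelated sampling data (the counts and bias energies are synthetic and purely illustrative):

        import numpy as np

        def wham(N_ki, u_ki, n_iter=50_000, tol=1e-12):
            """WHAM iteration. N_ki: histogram counts, shape (K states, M bins);
            u_ki: reduced bias energy of thermodynamic state k in bin i."""
            N_k = N_ki.sum(axis=1)                # samples per thermodynamic state
            C_i = N_ki.sum(axis=0)                # total counts per bin
            c_ki = np.exp(-u_ki)                  # bias factors
            z_k = np.ones(len(N_k))               # per-state partition functions
            for _ in range(n_iter):
                p_i = C_i / ((N_k / z_k) @ c_ki)  # unbiased bin probabilities
                p_i /= p_i.sum()
                z_new = c_ki @ p_i
                if np.max(np.abs(np.log(z_new / z_k))) < tol:
                    break
                z_k = z_new
            return p_i, -np.log(z_k)              # probabilities, free energies

        u = np.array([[0.0, 0.0, 0.0, 0.0],
                      [3.0, 1.0, 0.0, 1.0]])      # two states over four bins
        N = np.array([[40, 30, 20, 10],
                      [ 5, 20, 45, 30]])
        print(*wham(N, u))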

  5. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate the optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid comes from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of market prices due to the randomness in renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with a focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters; hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider uncertainty from various elements, such as solar photovoltaics, electric vehicle chargers, and residential baseloads, in the form of discrete probability functions. In the last part of this thesis, we address some other resources and concepts for enhancing the operation of power distribution and transmission systems. In particular, we propose a new framework to determine the best sites, sizes, and optimal payment incentives under special contracts for committed-type DG projects to offset distribution network investment costs. In this framework, the aim is to allocate DGs such that the profit gained by the distribution company is maximized while each DG unit's individual profit is also taken into account to ensure that private DG investment remains economical.

  6. On the Maximum-Weight Clique Problem.

    DTIC Science & Technology

    1985-06-01

    "hypergeometric distribution", Discrete Math. 25, 285-287; Chvátal, V. (1983), Linear Programming, W.H. Freeman, New York/San Francisco; Cook, S.A. (1971) ... Annals Discrete Math. 21, 325-356; Grötschel, M., Lovász, L., and Schrijver, A. (1984b), "Relaxations of Vertex Packing", Preprint No. 35, ... de Grenoble. See also Sbihi, N., "Algorithme de recherche d'un stable de cardinalité maximum dans un graphe sans étoile" [algorithm for finding a maximum-cardinality stable set in a star-free graph], Discrete Math. 19 (1980), 53.

  7. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of positive periodic solution is proved by employing the fixed point theorem on cones. By constructing appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  8. Parameter estimation problems for distributed systems using a multigrid method

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Dutt, Pravir

    1990-01-01

    The problem of estimating spatially varying coefficients of partial differential equations is considered from observation of the solution and of the right hand side of the equation. It is assumed that the observations are distributed in the domain and that enough observations are given. A method of discretization and an efficient multigrid method for solving the resulting discrete systems are described. Numerical results are presented for estimation of coefficients in an elliptic and a parabolic partial differential equation.

  9. A 24 km fiber-based discretely signaled continuous variable quantum key distribution system.

    PubMed

    Dinh Xuan, Quyen; Zhang, Zheshen; Voss, Paul L

    2009-12-21

    We report a continuous variable key distribution system that achieves a final secure key rate of 3.45 kilobits/s over a distance of 24.2 km of optical fiber. The protocol uses discrete signaling and post-selection to improve reconciliation speed and quantifies security by means of quantum state tomography. Polarization multiplexing and a frequency translation scheme permit transmission of a continuous wave local oscillator and suppression of noise from guided acoustic wave Brillouin scattering by more than 27 dB.

  10. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
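
    The key averaging step admits a closed form for the direct beam: the mean of exp(-tau/mu0) over a gamma distribution of optical depth is the gamma moment-generating function evaluated at -1/mu0. A sketch with assumed illustrative values (this shows the weighting idea only, not the full two-stream algorithm):

        import numpy as np
        from scipy import integrate, stats

        tau_bar, nu, mu0 = 10.0, 0.8, 0.5   # mean optical depth, gamma shape, cos(zenith)
        pdf = stats.gamma(a=nu, scale=tau_bar / nu).pdf

        # Domain-averaged direct transmittance: <T> = int p(tau) exp(-tau/mu0) dtau.
        T_num, _ = integrate.quad(lambda s: pdf(s) * np.exp(-s / mu0), 0, np.inf)
        T_closed = (1.0 + tau_bar / (nu * mu0)) ** (-nu)   # gamma MGF closed form
        T_homog = np.exp(-tau_bar / mu0)                   # homogeneous cloud

        # Sub-grid variability raises <T> well above the plane-parallel value.
        print(T_num, T_closed, T_homog)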

  11. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  12. Distribution-dependent robust linear optimization with applications to inventory control

    PubMed Central

    Kang, Seong-Cheol; Brisimi, Theodora S.

    2014-01-01

    This paper tackles linear programming problems with data uncertainty and applies it to an important inventory control problem. Each element of the constraint matrix is subject to uncertainty and is modeled as a random variable with a bounded support. The classical robust optimization approach to this problem yields a solution with guaranteed feasibility. As this approach tends to be too conservative when applications can tolerate a small chance of infeasibility, one would be interested in obtaining a less conservative solution with a certain probabilistic guarantee of feasibility. A robust formulation in the literature produces such a solution, but it does not use any distributional information on the uncertain data. In this work, we show that the use of distributional information leads to an equally robust solution (i.e., under the same probabilistic guarantee of feasibility) but with a better objective value. In particular, by exploiting distributional information, we establish stronger upper bounds on the constraint violation probability of a solution. These bounds enable us to “inject” less conservatism into the formulation, which in turn yields a more cost-effective solution (by 50% or more in some numerical instances). To illustrate the effectiveness of our methodology, we consider a discrete-time stochastic inventory control problem with certain quality of service constraints. Numerical tests demonstrate that the use of distributional information in the robust optimization of the inventory control problem results in 36%–54% cost savings, compared to the case where such information is not used. PMID:26347579

  13. Microtubules as mechanical force sensors.

    PubMed

    Karafyllidis, Ioannis G; Lagoudas, Dimitris C

    2007-03-01

    Microtubules are polymers of tubulin subunits (dimers) arranged on a hexagonal lattice. Each tubulin dimer comprises two monomers, the alpha-tubulin and the beta-tubulin, and can be found in two states: in the first state a mobile negative charge is located in the alpha-tubulin monomer, and in the second in the beta-tubulin monomer. Each tubulin dimer is modeled as an electrical dipole coupled to its neighbors by electrostatic forces. The location of the mobile charge in each dimer depends on the location of the charges in the dimer's neighborhood. Mechanical forces that act on the microtubule affect the distances between the dimers and alter the electrostatic potential. Changes in this potential affect the mobile negative charge location in each dimer and the charge distribution in the microtubule. The net effect is that mechanical forces affect the charge distribution in microtubules. We propose to exploit this effect and use microtubules as mechanical force sensors. We model each dimer as a two-state quantum system and, following the quantum computation paradigm, we use a discrete quantum random walk on the hexagonal microtubule lattice to determine the charge distribution. Different forces applied on the microtubule are modeled as different coin biases, leading to different probability distributions of the quantum walker location, which are directly connected to different charge distributions. Simulation results strongly indicate that microtubules can be used as mechanical force sensors and that they can also detect force directions and magnitudes.
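
    A hedged sketch of a discrete-time coined quantum walk, showing how the coin bias reshapes the walker's probability distribution; a 1D line is used for brevity in place of the paper's hexagonal microtubule lattice:

        import numpy as np

        def biased_coin(rho):
            """SU(2) coin; rho = 0.5 recovers the balanced (Hadamard-like) coin."""
            return np.array([[np.sqrt(rho),      np.sqrt(1 - rho)],
                             [np.sqrt(1 - rho), -np.sqrt(rho)]])

        def quantum_walk(steps, rho):
            n = 2 * steps + 1                      # positions -steps..steps
            psi = np.zeros((n, 2), dtype=complex)
            psi[steps, 0] = 1.0                    # walker at origin, coin state |0>
            C = biased_coin(rho)
            for _ in range(steps):
                psi = psi @ C.T                    # coin toss on the internal state
                shifted = np.zeros_like(psi)
                shifted[1:, 0] = psi[:-1, 0]       # coin |0> component moves right
                shifted[:-1, 1] = psi[1:, 1]       # coin |1> component moves left
                psi = shifted
            return (np.abs(psi) ** 2).sum(axis=1)  # position probabilities

        # Different coin biases (modeling different applied forces) give
        # visibly different walker distributions.
        for rho in (0.5, 0.8):
            p = quantum_walk(100, rho)
            print(rho, p.argmax() - 100, p.max())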

  14. What is the correct cost functional for variational data assimilation?

    NASA Astrophysics Data System (ADS)

    Bröcker, Jochen

    2018-03-01

    Variational approaches to data assimilation, and weak-constraint four-dimensional variational assimilation (WC-4DVar) in particular, are important in the geosciences but also in other communities (often under different names). The cost functions and the resulting optimal trajectories may have a probabilistic interpretation, for instance by linking data assimilation with maximum a posteriori (MAP) estimation. This is possible in particular if the unknown trajectory is modelled as the solution of a stochastic differential equation (SDE), as is increasingly the case in weather forecasting and climate modelling. In this situation, the MAP estimator (or "most probable path" of the SDE) is obtained by minimising the Onsager-Machlup functional. Although this fact is well known, there seems to be some confusion in the literature, with the energy (or "least squares") functional sometimes being claimed to yield the most probable path. The first aim of this paper is to address this confusion and show that the energy functional does not, in general, provide the most probable path. The second aim is to discuss the implications in practice. Although the mentioned results pertain to stochastic models in continuous time, they do have consequences in practice where SDEs are approximated by discrete time schemes. It turns out that using an approximation to the SDE and calculating its most probable path does not necessarily yield a good approximation to the most probable path of the SDE proper. This suggests that even in discrete time, a version of the Onsager-Machlup functional should be used, rather than the energy functional, at least if the solution is to be interpreted as a MAP estimator.
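
    For a scalar SDE dX_t = f(X_t) dt + sigma dW_t on [0, T], the distinction the paper draws can be stated compactly (standard form, not quoted from the paper): the Onsager-Machlup functional is the energy functional plus a divergence correction,

        J_{\mathrm{energy}}[x] = \frac{1}{2\sigma^{2}} \int_0^T \bigl(\dot{x}(t) - f(x(t))\bigr)^{2}\,dt,
        \qquad
        J_{\mathrm{OM}}[x] = J_{\mathrm{energy}}[x] + \frac{1}{2}\int_0^T f'(x(t))\,dt,

    and it is the second functional whose minimiser is the most probable path.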

  15. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    NASA Astrophysics Data System (ADS)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in an underground distribution system is presented. The Discrete Wavelet Transform (DWT), based on traveling waves, is employed to detect the high-frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is used to calculate the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. The results show that the proposed technique performs satisfactorily and will be very useful in the development of power system protection schemes.
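
    A hedged single-ended sketch of the traveling-wave idea on a synthetic surge signal (the sampling rate, propagation speed, wavelet choice and peak-detection details are assumptions, not the paper's setup):

        import numpy as np
        import pywt

        fs = 1e6                   # sampling rate, Hz (assumed)
        v = 1.8e8                  # surge propagation speed in the cable, m/s (assumed)

        # Synthetic transient: a surge arrival plus its reflection from the
        # fault point 30 us later (an assumed fault ~2.7 km from the sending end).
        t = np.arange(0, 2e-3, 1 / fs)
        sig = np.zeros_like(t)
        for t0 in (200e-6, 230e-6):
            s = np.clip(t - t0, 0.0, None)
            sig += np.exp(-s * 3e5) * np.sin(2 * np.pi * 5e4 * s) * (t >= t0)

        # Level-1 detail coefficients expose the high-frequency arrival instants.
        d1 = pywt.downcoef('d', sig, 'db4', level=1)
        thr = 0.1 * np.abs(d1).max()
        hits = np.flatnonzero(np.abs(d1) > thr) * 2 / fs  # coefficients decimated by 2
        t1 = hits[0]
        t2 = hits[hits > t1 + 20e-6][0]                   # skip the first surge's ring-down

        print("estimated fault distance ~", v * (t2 - t1) / 2, "m")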

  16. Global stabilization analysis of inertial memristive recurrent neural networks with discrete and distributed delays.

    PubMed

    Wang, Leimin; Zeng, Zhigang; Ge, Ming-Feng; Hu, Junhao

    2018-05-02

    This paper deals with the stabilization problem of memristive recurrent neural networks with inertial terms, discrete delays, and bounded and unbounded distributed delays. First, for inertial memristive recurrent neural networks (IMRNNs) with second-order derivatives of states, an appropriate variable substitution method is invoked to transform IMRNNs into a first-order differential form. Then, based on nonsmooth analysis theory, several algebraic criteria are established for the global stabilizability of IMRNNs under the proposed feedback control, where the cases with both bounded and unbounded distributed delays are successfully addressed. Finally, the theoretical results are illustrated via numerical simulations. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Design, implementation and application of distributed order PI control.

    PubMed

    Zhou, Fengyu; Zhao, Yang; Li, Yan; Chen, YangQuan

    2013-05-01

    In this paper, a series of distributed order PI controller design methods are derived and applied to the robust control of wheeled service robots, which can tolerate more structural and parametric uncertainties than the corresponding fractional order PI control. A practical discrete incremental distributed order PI control strategy is proposed based on the discretization method and frequency criteria, which can be commonly used in many fields of fractional order systems, control and signal processing. Besides, an auto-tuning strategy and the genetic algorithm are applied to the distributed order PI control as well. A number of experimental results are provided to show the advantages and distinguishing features of the discussed methods in fairways. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
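
    A hedged sketch of the idea behind such tests, using a Monte Carlo calibrated smallest-density statistic (an illustration of the principle, not the authors' exact statistic):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        f = stats.norm(0.0, 1.0)                 # the specified density

        def min_density_pvalue(draws, f, n_mc=2000):
            """P-value for min_i f(x_i): if the draws really come from f,
            it is unlikely that any of them lands where f is tiny."""
            stat = f.pdf(draws).min()
            n = len(draws)
            null = np.array([f.pdf(f.rvs(n, random_state=rng)).min()
                             for _ in range(n_mc)])
            return np.mean(null <= stat)         # small p-value -> reject f

        x_good = f.rvs(200, random_state=rng)
        x_bad = stats.t(df=2).rvs(200, random_state=rng)  # heavy tails vs normal
        print(min_density_pvalue(x_good, f), min_density_pvalue(x_bad, f))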

  19. Modeling habitat dynamics accounting for possible misclassification

    USGS Publications Warehouse

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology, as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging, as classification error can be confused with change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel), and the case when true and observed states are obtained at one level of resolution but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics, and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.

  20. Extreme events and event size fluctuations in biased random walks on networks.

    PubMed

    Kishore, Vimal; Santhanam, M S; Amritkar, R E

    2012-05-01

    Random walks on discrete lattice models are important to understanding various types of transport processes. Extreme events, defined as exceedances of the flux of walkers above a prescribed threshold, have been studied recently in the context of complex networks. This was motivated by the occurrence of rare events such as traffic jams, floods, and power blackouts which take place on networks. In this work, we study extreme events in a generalized random walk model in which the walk is preferentially biased by the network topology. The walkers preferentially choose to hop toward the hubs or toward small degree nodes. In this setting, we show that extremely large fluctuations in event sizes are possible on small degree nodes when the walkers are biased toward the hubs. In particular, we obtain the distribution of event sizes on the network. Further, the probability for the occurrence of extreme events on any node in the network depends on its "generalized strength," a measure of the ability of a node to attract walkers. The generalized strength is a function of the degree of the node and that of its nearest neighbors. We obtain analytical and simulation results for the probability of occurrence of extreme events on the nodes of a network using a generalized random walk model. The results reveal that nodes with a larger value of generalized strength, on average, display a lower probability for the occurrence of extreme events than nodes with lower values of generalized strength.
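
    A hedged sketch with networkx (illustrative graph and bias): for hop probabilities proportional to k_j**alpha, detailed balance gives a stationary occupation pi_i proportional to k_i**alpha times the sum of k_j**alpha over the neighbors j, which is the degree-dependent quantity playing the role of the node's generalized strength here:

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(5)
        G = nx.barabasi_albert_graph(200, 3, seed=5)
        alpha = 1.0                              # alpha > 0 biases walkers toward hubs
        deg = dict(G.degree())

        # Detailed-balance stationary occupation: pi_i ~ k_i^a * sum_{j in N(i)} k_j^a.
        strength = {i: deg[i] ** alpha * sum(deg[j] ** alpha for j in G[i]) for i in G}
        Z = sum(strength.values())
        pi = {i: s / Z for i, s in strength.items()}

        # Check by simulating one biased walker.
        nbrs = {i: list(G[i]) for i in G}
        wts = {i: np.array([deg[j] ** alpha for j in nbrs[i]]) for i in G}
        wts = {i: w / w.sum() for i, w in wts.items()}
        counts = np.zeros(len(G))
        node = 0
        for _ in range(100_000):
            node = nbrs[node][rng.choice(len(nbrs[node]), p=wts[node])]
            counts[node] += 1
        counts /= counts.sum()

        hub = max(G, key=lambda i: deg[i])
        print(pi[hub], counts[hub])              # theory and simulation agree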

  1. Synchronization Analysis of Master-Slave Probabilistic Boolean Networks.

    PubMed

    Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W C; Cao, Jinde

    2015-08-28

    In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each with a certain probability, at each discrete time point. We first define the synchronization of master-slave PBNs with probability one, and then investigate synchronization with probability one. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that for any initial states, the master BN can be synchronized by the slave BN with a certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results.

  2. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations

    PubMed Central

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360

  3. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
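
    A hedged sketch of the graph transformation recursion (intermediate states are removed one at a time while branching probabilities and waiting times are renormalized), cross-checked against the direct linear solve; the 4-state chain is illustrative:

        import numpy as np

        def gt_mfpt(P, tau, source, target):
            """Graph-transformation mean first-passage time source -> target.
            P: transition matrix; tau: waiting time per state (all ones for
            a plain discrete-time chain)."""
            P, tau = P.astype(float).copy(), tau.astype(float).copy()
            for x in [s for s in range(len(P)) if s not in (source, target)]:
                keep = 1.0 - P[x, x]
                for i in range(len(P)):
                    if i == x or P[i, x] == 0.0:
                        continue
                    tau[i] += P[i, x] * tau[x] / keep     # inherit x's waiting time
                    P[i, :] += P[i, x] * P[x, :] / keep   # reroute branching through x
                    P[i, x] = 0.0
                P[x, :] = 0.0
            # Only source and target remain: escape from source hits the target.
            return tau[source] / (1.0 - P[source, source])

        P = np.array([[0.5, 0.3, 0.2, 0.0],
                      [0.2, 0.5, 0.2, 0.1],
                      [0.1, 0.2, 0.5, 0.2],
                      [0.0, 0.0, 0.0, 1.0]])    # state 3 absorbing (the target)
        tau = np.ones(4)
        print(gt_mfpt(P, tau, source=0, target=3))

        # Cross-check: (I - Q) t = tau over the transient states.
        print(np.linalg.solve(np.eye(3) - P[:3, :3], tau[:3])[0])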

  4. Direct molecular identification of Trypanosoma cruzi discrete typing units in domestic and peridomestic Triatoma infestans and Triatoma sordida from the Argentine Chaco.

    PubMed

    Maffey, L; Cardinal, M V; Ordóñez-Krasnowski, P C; Lanati, L A; Lauricella, M A; Schijman, A G; Gürtler, R E

    2012-10-01

    We assessed the distribution of Trypanosoma cruzi Discrete Typing Units (DTUs) in domestic and peridomestic Triatoma infestans and Triatoma sordida specimens collected in a well-defined rural area in Pampa del Indio, northeastern Argentina. Microscopically-positive bugs were randomly selected with a multi-level sampling design, and DTUs were identified using direct PCR strategies. TcVI predominated in 61% of 69 T. infestans and in 56% of 9 T. sordida. TcV was the secondary DTU in T. infestans (16%) and was found in 1 T. sordida specimen (11%). Three T. sordida (33%) were found infected with TcI, a DTU also identified in local Didelphis albiventris opossums. Mixed DTU infections occurred rarely (5%) and were detected both directly from the bugs' rectal ampoule and parasite cultures. The identified DTUs and bug collection sites of T. infestans were significantly associated. Bugs infected with TcV were almost exclusively captured in domiciles whereas those with TcVI were found similarly in domiciles and peridomiciles. All mixed infections occurred in domiciles. TcV-infected bugs fed more often on humans than on dogs, whereas TcVI-infected bugs showed the reverse pattern. T. sordida is a probable sylvatic vector of TcI linked to D. albiventris, and could represent a secondary vector of TcVI and TcV in the domestic/peridomestic cycle.

  5. A low noise discrete velocity method for the Boltzmann equation with quantized rotational and vibrational energy

    NASA Astrophysics Data System (ADS)

    Clarke, Peter; Varghese, Philip; Goldstein, David

    2018-01-01

    A discrete velocity method is developed for gas mixtures of diatomic molecules with both rotational and vibrational energy states. A fully quantized model is described, and rotation-translation and vibration-translation energy exchanges are simulated using a Larsen-Borgnakke exchange model. Elastic and inelastic molecular interactions are modeled during every simulated collision to help produce smooth internal energy distributions. The method is verified by comparing simulations of homogeneous relaxation by our discrete velocity method to numerical solutions of the Jeans and Landau-Teller equations, and to direct simulation Monte Carlo. We compute the structure of a 1D shock using this method, and determine how the rotational energy distribution varies with spatial location in the shock and with position in velocity space.

  6. Comparative study of lesions created by high-intensity focused ultrasound using sequential discrete and continuous scanning strategies.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing

    2013-03-01

    Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally for two energy-delivery strategies, i.e., sequential discrete and continuous scanning modes. Simulations were based on the combination of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and the bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours are observed in the peak temperature distribution and the lesion boundaries as the spacing between adjacent exposure points increases. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries are produced, and the peak temperature values decrease significantly with increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode achieves higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from HIFU exposure can be controlled by adjusting the transducer scanning speed, which is important for improving HIFU treatment efficiency.

  7. Parameter redundancy in discrete state-space and integrated models.

    PubMed

    Cole, Diana J; McCrea, Rachel S

    2016-09-01

    Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
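
    A hedged toy illustration of the exhaustive-summary idea using symbolic rank (a hypothetical two-parameter capture-recapture summary, not the paper's state-space examples): a model is parameter redundant exactly when the Jacobian of its exhaustive summary has rank below the number of parameters.

        import sympy as sp

        phi, p = sp.symbols('phi p', positive=True)

        # Toy exhaustive summary in which only the product phi*p appears:
        kappa = sp.Matrix([phi * p, 1 - phi * p])
        print(kappa.jacobian([phi, p]).rank())    # 1 < 2 parameters -> redundant

        # A summary that separates phi from p restores full rank, i.e. all
        # parameters become estimable (mirroring the integrated-data result):
        kappa2 = sp.Matrix([phi * p, phi * (1 - p) * phi * p, 1 - phi * p])
        print(kappa2.jacobian([phi, p]).rank())   # 2 -> not redundant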

  8. Discrete Gust Model for Launch Vehicle Assessments

    NASA Technical Reports Server (NTRS)

    Leahy, Frank B.

    2008-01-01

    Analysis of spacecraft vehicle responses to atmospheric wind gusts during flight is important in the establishment of vehicle design structural requirements and operational capability. Typically, wind gust models can be either a spectral type determined by a random process having a wide range of wavelengths, or a discrete type having a single gust of predetermined magnitude and shape. Classical discrete models used by NASA during the Apollo and Space Shuttle Programs included a 9 m/sec quasi-square-wave gust with variable wavelength from 60 to 300 m. A later study derived a discrete gust model from a military specification (MIL-SPEC) document that used a "1-cosine" shape. The MIL-SPEC document contains a curve of non-dimensional gust magnitude as a function of non-dimensional gust half-wavelength based on the Dryden spectral model, but fails to list the equation necessary to reproduce the curve. Therefore, previous studies could only estimate a value of gust magnitude from the curve, or attempt to fit a function to it. This paper presents the development of the MIL-SPEC curve, and provides the necessary information to calculate discrete gust magnitudes as a function of both gust half-wavelength and the desired probability level of exceeding a specified gust magnitude.
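
    The "1-cosine" shape itself is standard and easy to evaluate; a sketch with an assumed magnitude and half-wavelength (the non-dimensional magnitude-versus-half-wavelength relation that the paper reconstructs is not reproduced here):

        import numpy as np

        def one_minus_cosine_gust(x, Vm, dm):
            """MIL-SPEC style discrete gust: V = (Vm/2) * (1 - cos(pi*x/dm))
            for 0 <= x <= 2*dm, where dm is the gust half-wavelength."""
            x = np.asarray(x, dtype=float)
            v = 0.5 * Vm * (1.0 - np.cos(np.pi * x / dm))
            return np.where((x >= 0) & (x <= 2 * dm), v, 0.0)

        x = np.linspace(-50, 350, 9)
        print(one_minus_cosine_gust(x, Vm=9.0, dm=150.0))  # peaks at Vm when x = dm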

  9. From Movements to Actions: Two Mechanisms for Learning Action Sequences

    ERIC Educational Resources Information Center

    Endress, Ansgar D.; Wood, Justin N.

    2011-01-01

    When other individuals move, we interpret their movements as discrete, hierarchically-organized, goal-directed actions. However, the mechanisms that integrate visible movement features into actions are poorly understood. Here, we consider two sequence learning mechanisms--transitional probability-based (TP) and position-based encoding…
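
    A minimal sketch of the transitional-probability computation on a hypothetical token sequence; dips in TP are the classic cue for boundaries between units:

        from collections import Counter

        # Hypothetical movement tokens; TP(a -> b) = P(b | a).
        seq = "ABCABCXYZXYZABCXYZ"
        pairs = Counter(zip(seq, seq[1:]))
        firsts = Counter(seq[:-1])
        tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

        # The lowest-TP transitions mark likely boundaries between actions.
        print(sorted(tp.items(), key=lambda kv: kv[1])[:3])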

  10. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S_3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated. © 2000 The American Astronomical Society.

  11. A Markov Environment-dependent Hurricane Intensity Model and Its Comparison with Multiple Dynamic Models

    NASA Astrophysics Data System (ADS)

    Jing, R.; Lin, N.; Emanuel, K.; Vecchi, G. A.; Knutson, T. R.

    2017-12-01

    A Markov environment-dependent hurricane intensity model (MeHiM) is developed to simulate the climatology of hurricane intensity given the surrounding large-scale environment. The model considers three unobserved discrete states representing, respectively, a storm's slow, moderate, and rapid intensification (or deintensification). Each state is associated with a probability distribution of intensity change. The storm's movement from one state to another, regarded as a Markov chain, is described by a transition probability matrix. The initial state is estimated with a Bayesian approach. All three model components (initial intensity, state transition, and intensity change) depend on environmental variables including potential intensity, vertical wind shear, midlevel relative humidity, and ocean mixing characteristics. This environment-dependent Markov model of hurricane intensity shows a significant improvement over previous statistical models (e.g., linear, nonlinear, and finite mixture models) in estimating the distributions of 6-h and 24-h intensity change, lifetime maximum intensity, and landfall intensity, etc. Here we compare MeHiM with various dynamical models, including a global climate model [the High-Resolution Forecast-Oriented Low Ocean Resolution model (HiFLOR)], a regional hurricane model [the Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model], and a simplified hurricane dynamic model [the Coupled Hurricane Intensity Prediction System (CHIPS)] and its newly developed fast simulator. The MeHiM, developed from reanalysis data, is applied to estimate the intensity of simulated storms for comparison with the dynamical-model predictions under the current climate. The dependence of hurricanes on the environment under current and projected future climates in the various models will also be compared statistically.
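
    A hedged structural sketch of such a three-state Markov intensity model (illustrative, environment-independent numbers; in MeHiM the initial state, the transition matrix and the per-state intensity-change distributions all depend on the environmental covariates listed above):

        import numpy as np

        rng = np.random.default_rng(11)

        # Hidden states: slow / moderate / rapid intensity change.
        T = np.array([[0.80, 0.15, 0.05],       # illustrative transition matrix
                      [0.20, 0.60, 0.20],
                      [0.10, 0.30, 0.60]])
        dv_mean = np.array([0.0, 5.0, 15.0])    # 6-h intensity change per state, kt
        dv_sd = np.array([3.0, 5.0, 8.0])

        def simulate_track(n_steps=40, v0=35.0):
            v, state = v0, rng.choice(3)        # (Bayesian initial state omitted)
            track = [v]
            for _ in range(n_steps):
                state = rng.choice(3, p=T[state])              # Markov transition
                v = max(v + rng.normal(dv_mean[state], dv_sd[state]), 0.0)
                track.append(v)
            return np.array(track)

        lmi = [simulate_track().max() for _ in range(1000)]    # lifetime max intensity
        print(np.percentile(lmi, [50, 90, 99]))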

  12. Presenting evidence and summary measures to best inform societal decisions when comparing multiple strategies.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2011-07-01

    Multiple strategy comparisons in health technology assessment (HTA) are becoming increasingly important, with multiple alternative therapeutic actions, combinations of therapies and diagnostic and genetic testing alternatives. Comparison under uncertainty of incremental cost, effects and cost effectiveness across more than two strategies is conceptually and practically very different from that for two strategies, where all evidence can be summarized in a single bivariate distribution on the incremental cost-effectiveness plane. Alternative methods for comparing multiple strategies in HTA have been developed in (i) presenting cost and effects on the cost-disutility plane and (ii) summarizing evidence with multiple strategy cost-effectiveness acceptability (CEA) and expected net loss (ENL) curves and frontiers. However, critical questions remain for the analyst and decision maker of how these techniques can be best employed across multiple strategies to (i) inform clinical and cost inference in presenting evidence, and (ii) summarize evidence of cost effectiveness to inform societal reimbursement decisions where preferences may be risk neutral or somewhat risk averse under the Arrow-Lind theorem. We critically consider how evidence across multiple strategies can be best presented and summarized to inform inference and societal reimbursement decisions, given currently available methods. In the process, we make a number of important original findings. First, in presenting evidence for multiple strategies, the joint distribution of costs and effects on the cost-disutility plane with associated flexible comparators varying across replicates for cost and effect axes ensure full cost and effect inference. Such inference is usually confounded on the cost-effectiveness plane with comparison relative to a fixed origin and axes. Second, in summarizing evidence for risk-neutral societal decision making, ENL curves and frontiers are shown to have advantages over the CEA frontier in directly presenting differences in expected net benefit (ENB). The CEA frontier, while identifying strategies that maximize ENB, only presents their probability of maximizing net benefit (NB) and, hence, fails to explain why strategies maximize ENB at any given threshold value. Third, in summarizing evidence for somewhat risk-averse societal decision making, trade-offs between the strategy maximizing ENB and other potentially optimal strategies with higher probability of maximizing NB should be presented over discrete threshold values where they arise. However, the probabilities informing these trade-offs and associated discrete threshold value regions should be derived from bilateral CEA curves to prevent confounding by other strategies inherent in multiple strategy CEA curves. Based on these findings, a series of recommendations are made for best presenting and summarizing cost-effectiveness evidence for reimbursement decisions when comparing multiple strategies, which are contrasted with advice for comparing two strategies. Implications for joint research and reimbursement decisions are also discussed.

  13. Prioritizing Project Risks Using AHP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thibadeau, Barbara M

    2007-01-01

    This essay introduces the Analytic Hierarchy Process (AHP) as a method by which to rank project risks, in terms of importance as well as likelihood. AHP is a way to handle quantifiable and/or intangible criteria in the decision-making process. It is a multi-objective, multi-criteria decision-making approach based on the idea of pair-wise comparisons of alternatives with respect to a given criterion (e.g., which alternative, A or B, is preferred, and by how much more is it preferred) or with respect to an objective (e.g., which is more important, A or B, and by how much more is it important). This approach was pioneered by Thomas Saaty in the late 1970s. It has been suggested that a successful project is one that successfully manages risk, and that project management is the management of uncertainty. Risk management relies on the quantification of uncertainty, which, in turn, is predicated upon the accuracy of probabilistic approaches (in terms of likelihood as well as magnitude). In many cases, the appropriate probability distribution (or probability value) is unknown. Researchers have shown that probability values are not assigned very accurately, that the use of verbal expressions is not a suitable alternative, that there is great variability in the use and interpretation of these values, and that there is a great reluctance to assign them in the first place. Data from an ongoing project are used to show that AHP can be used to obtain these values, thus overcoming some of the problems associated with the direct assignment of discrete probability values. A novel method by which to calculate the consistency of the data is introduced. The AHP approach is easily implemented and typically offers results that are consistent with the decision maker's intuition.
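
    A minimal sketch of the core AHP computation on an illustrative judgment matrix: priorities come from the principal eigenvector of the pairwise comparison matrix, and Saaty's consistency index from its principal eigenvalue (the essay's own consistency method is not reproduced here):

        import numpy as np

        # Pairwise comparisons of three risks: A[i, j] = how much more
        # important/likely risk i is judged to be than risk j.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        w, V = np.linalg.eig(A)
        k = np.argmax(w.real)
        priorities = np.abs(V[:, k].real)
        priorities /= priorities.sum()       # principal eigenvector = risk weights

        n = len(A)
        CI = (w[k].real - n) / (n - 1)       # Saaty's consistency index
        RI = 0.58                            # random index for n = 3
        print(priorities, "CR =", CI / RI)   # CR < 0.1 -> acceptably consistent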

  14. An operational real-time flood forecasting system in Southern Italy

    NASA Astrophysics Data System (ADS)

    Ortiz, Enrique; Coccia, Gabriele; Todini, Ezio

    2015-04-01

    A real-time flood forecasting system has been operating since 2012 as a non-structural measure for mitigating flood risk in the Campania Region (Southern Italy), within the Sele river basin (3240 km2). The Sele Flood Forecasting System (SFFS) has been built within the FEWS (Flood Early Warning System) platform developed by Deltares, and it assimilates the numerical weather predictions of the COSMO LAM family: the deterministic COSMO-LAMI I2, the deterministic COSMO-LAMI I7, and the ensemble numerical weather predictions of COSMO-LEPS (16 members). The Sele FFS is composed of a cascade of three main models. The first is a fully continuous, physically based, distributed hydrological model named TOPKAPI-eXtended (Idrologia&Ambiente s.r.l., Naples, Italy), simulating the dominant processes controlling soil water dynamics, runoff generation and discharge with a spatial resolution of 250 m. The second is a set of Neural Networks (ANN) built for forecasting the river stages at a set of monitored cross-sections. The third is a Model Conditional Processor (MCP), which provides the predictive uncertainty (i.e., the probability of occurrence of a future flood event) within the framework of a multi-temporal forecast, according to the most recent advancements on this topic (Coccia and Todini, HESS, 2011). The MCP provides information about the probability of exceeding a maximum river stage within the forecast lead time, by means of a discrete time function representing the variation of the cumulative probability of exceeding a river stage during the forecast lead time and the distribution of the time of occurrence of the flood peak, starting from one or more model forecasts. This work shows the Sele FFS performance after two years of operation, highlighting the added value it can provide to a flood early warning and emergency management system.

  15. Survival or Mortality: Does Risk Attribute Framing Influence Decision-Making Behavior in a Discrete Choice Experiment?

    PubMed

    Veldwijk, Jorien; Essers, Brigitte A B; Lambooij, Mattijs S; Dirksen, Carmen D; Smit, Henriette A; de Wit, G Ardine

    2016-01-01

    To test how attribute framing in a discrete choice experiment (DCE) affects respondents' decision-making behavior and their preferences. Two versions of a DCE questionnaire containing nine choice tasks were distributed among a representative sample of the Dutch population aged 55 to 65 years. The DCE consisted of four attributes related to the decision regarding participation in genetic screening for colorectal cancer (CRC). The risk attribute included was framed positively as the probability of surviving CRC and negatively as the probability of dying from CRC. Panel mixed-logit models were used to estimate the relative importance of the attributes. The data of the positively and negatively framed DCE were compared on the basis of direct attribute ranking, dominant decision-making behavior, preferences, and importance scores. The majority (56%) of the respondents ranked survival as the most important attribute in the positively framed DCE, whereas only a minority (8%) of the respondents ranked mortality as the most important attribute in the negatively framed DCE. Respondents made dominant choices based on survival significantly more often than based on mortality. The framing of the risk attribute significantly influenced all attribute-level estimates and resulted in different preference structures among respondents in the positively and negatively framed data set. Risk framing affects how respondents value the presented risk. Positive risk framing led to increased dominant decision-making behavior, whereas negative risk framing led to risk-seeking behavior. Attribute framing should have a prominent part in the expert and focus group interviews, and different types of framing should be used in the pilot version of DCEs as well as in actual DCEs to estimate the magnitude of the effect of choosing different types of framing. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  16. Transition probability spaces in loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Guo, Xiao-Kan

    2018-03-01

    We study the (generalized) transition probability spaces, in the sense of Mielnik and Cantoni, for spacetime quantum states in loop quantum gravity. First, we show that loop quantum gravity admits the structures of transition probability spaces. This is exemplified by first checking such structures in covariant quantum mechanics and then identifying the transition probability spaces in spin foam models via a simplified version of general boundary formulation. The transition probability space thus defined gives a simple way to reconstruct the discrete analog of the Hilbert space of the canonical theory and the relevant quantum logical structures. Second, we show that the transition probability space and in particular the spin foam model are 2-categories. Then we discuss how to realize in spin foam models two proposals by Crane about the mathematical structures of quantum gravity, namely, the quantum topos and causal sites. We conclude that transition probability spaces provide us with an alternative framework to understand various foundational questions of loop quantum gravity.

  17. Statistical characterization of a large geochemical database and effect of sample size

    USGS Publications Warehouse

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States) and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements passed the test for either a normal or a lognormal distribution on the declustered data set, partly because of the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until the sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histograms, stem-and-leaf displays, and probability plots are recommended for rough judgement of the probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
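
    A hedged sketch of the lognormal Q-Q inspection described above, with a synthetic mixture standing in for an element's concentrations (a lognormal Q-Q plot is simply a normal probability plot of the log-concentrations):

        import numpy as np
        from scipy import stats
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(9)

        # Lognormal background plus a small ore-grade subpopulation, mimicking
        # the kinked Q-Q patterns described for Pb, Cu, Zn and Ag.
        background = rng.lognormal(mean=2.0, sigma=0.5, size=9500)
        enriched = rng.lognormal(mean=5.0, sigma=0.3, size=500)
        conc = np.concatenate([background, enriched])

        stats.probplot(np.log(conc), dist="norm", plot=plt)
        plt.title("Lognormal Q-Q plot: the kink marks the enriched subpopulation")
        plt.show()

        # Large-n formal tests reject normality for the mixed population.
        sub = rng.choice(conc, size=3000, replace=False)
        print(stats.shapiro(np.log(sub)))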

  18. Performance on perceptual word identification is mediated by discrete states.

    PubMed

    Swagman, April R; Province, Jordan M; Rouder, Jeffrey N

    2015-02-01

    We contrast predictions from discrete-state models of all-or-none information loss with signal-detection models of graded strength for the identification of briefly flashed English words. Previous assessments have focused on whether ROC curves are straight or not, which is a test of a discrete-state model in which detection leads to the highest confidence response with certainty. We, along with many others, argue that this certainty assumption is too constraining and, consequently, that the straight-line ROC test is too stringent. Instead, we assess a core property of discrete-state models, conditional independence, where the pattern of responses depends only on which state is entered. The conditional independence property implies that confidence ratings are a mixture of detect-state and guess-state responses, and that stimulus strength factors (the duration of the flashed word in this report) affect only the probability of entering a state, not the responses conditional on a state. To assess this mixture property, 50 participants saw words presented briefly on a computer screen at three variable flash durations, followed by either a two-alternative confidence-ratings task or a yes-no confidence-ratings task. Comparable discrete-state and signal-detection models were fit to the data for each participant and task. The discrete-state models outperformed the signal-detection models for 90% of participants in the two-alternative task and for 68% of participants in the yes-no task. We conclude that discrete-state models are viable for predicting performance across stimulus conditions in a perceptual word identification task.

  1. Fractional Programming for Communication Systems—Part II: Uplink Scheduling via Matching

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

    This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application to continuous optimization problems. In this Part II, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multi-cell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Further, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP; but the proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, both in terms of run-time efficiency and optimization results.

  2. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    PubMed

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.

  3. Investigation of Dielectric Breakdown Characteristics for Double-break Vacuum Interrupter and Dielectric Breakdown Probability Distribution in Vacuum Interrupter

    NASA Astrophysics Data System (ADS)

    Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi

    To investigate the reliability of vacuum insulation equipment, a study was carried out to clarify the breakdown probability distributions in a vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which indicates the voltage that permits a zero breakdown probability. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant irrespective of gap length. This indicates that the scatter of the breakdown voltage was increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupters' voltage sharing is taken into account.
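
    In standard notation, the three-parameter Weibull form implied by the abstract reads as follows (the paper's exact parameterization is an assumption here):

    ```latex
    % V_0: location parameter (the voltage permitting zero breakdown
    % probability), \eta: scale parameter, \beta: shape parameter (reported
    % as roughly 10--14 for the static vacuum gap).
    F(V) = 1 - \exp\!\left[-\left(\frac{V - V_0}{\eta}\right)^{\beta}\right],
    \qquad V \ge V_0 .
    ```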

  4. The calculation of force-free fields from discrete flux distributions. [for chromospheric magnetic fields

    NASA Technical Reports Server (NTRS)

    Sheeley, N. R., Jr.; Harvey, J. W.

    1975-01-01

    This paper presents particularly simple mathematical formulas for the calculation of force-free fields of constant alpha from the distribution of discrete sources on a flat surface. The advantage of these formulas lies in their physical simplicity and the fact that they can be easily used in practice to calculate the fields. The disadvantage is that they are limited to fields of 'sufficiently small alpha'. These formulas may be useful in the study of chromospheric magnetic fields by the comparison of high-resolution H-alpha photographs and photospheric magnetograms.

  5. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.

  6. Multilayer shallow water models with locally variable number of layers and semi-implicit time discretization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Luca; Fernández-Nieto, Enrique D.; Garres-Díaz, José; Narbona-Reina, Gladys

    2018-07-01

    We propose an extension of the discretization approaches for multilayer shallow water models, aimed at making them more flexible and efficient for realistic applications to coastal flows. A novel discretization approach is proposed, in which the number of vertical layers and their distribution are allowed to change in different regions of the computational domain. Furthermore, semi-implicit schemes are employed for the time discretization, leading to a significant efficiency improvement for subcritical regimes. We show that, in the typical regimes in which the application of multilayer shallow water models is justified, the resulting discretization does not introduce any major spurious features and again allows the computational cost to be reduced substantially in areas with complex bathymetry. As an example of the potential of the proposed technique, an application to a sediment transport problem is presented, showing a remarkable improvement with respect to standard discretization approaches.

  7. A Spectral Analysis of Discrete-Time Quantum Walks Related to the Birth and Death Chains

    NASA Astrophysics Data System (ADS)

    Ho, Choon-Lin; Ide, Yusuke; Konno, Norio; Segawa, Etsuo; Takumi, Kentaro

    2018-04-01

    In this paper, we consider a spectral analysis of discrete-time quantum walks on the path. For isospectral coin cases, we show that the time-averaged distribution and stationary distributions of the quantum walks are described by the pair of eigenvalues of the coins as well as the eigenvalues and eigenvectors of the corresponding random walks, which are usually referred to as birth and death chains. As an example of the results, we derive the time-averaged distribution of the so-called Szegedy walk, which is related to the Ehrenfest model. It is represented by Krawtchouk polynomials, which are the eigenvectors of the model, and includes the arcsine law.

  8. Distributed mean curvature on a discrete manifold for Regge calculus

    NASA Astrophysics Data System (ADS)

    Conboye, Rory; Miller, Warner A.; Ray, Shannon

    2015-09-01

    The integrated mean curvature of a simplicial manifold is well understood in both Regge Calculus and Discrete Differential Geometry. However, a well motivated pointwise definition of curvature requires a careful choice of the volume over which to uniformly distribute the local integrated curvature. We show that hybrid cells formed using both the simplicial lattice and its circumcentric dual emerge as a remarkably natural structure for the distribution of this local integrated curvature. These hybrid cells form a complete tessellation of the simplicial manifold, contain a geometric orthonormal basis, and are also shown to give a pointwise mean curvature with a natural interpretation as the fractional rate of change of the normal vector.

  9. Load Balancing in Hypergraphs

    NASA Astrophysics Data System (ADS)

    Delgosha, Payam; Anantharam, Venkat

    2018-03-01

    Consider a simple locally finite hypergraph on a countable vertex set, where each edge represents one unit of load which should be distributed among the vertices defining the edge. An allocation of load is called balanced if load cannot be moved from a vertex to another that is carrying less load. We analyze the properties of balanced allocations of load. We extend the concept of balancedness from finite hypergraphs to their local weak limits in the sense of Benjamini and Schramm (Electron J Probab 6(23):13, 2001) and Aldous and Steele (in: Probability on discrete structures. Springer, Berlin, pp 1-72, 2004). To do this, we define a notion of unimodularity for hypergraphs which could be considered an extension of unimodularity in graphs. We give a variational formula for the balanced load distribution and, in particular, we characterize it in the special case of unimodular hypergraph Galton-Watson processes. Moreover, we prove the convergence of the maximum load under some conditions. Our work is an extension to hypergraphs of Anantharam and Salez (Ann Appl Probab 26(1):305-327, 2016), which considered load balancing in graphs, and is aimed at more comprehensively resolving conjectures of Hajek (IEEE Trans Inf Theory 36(6):1398-1414, 1990).

  10. The chronology of reindeer hunting on Norway's highest ice patches

    PubMed Central

    Pilø, Lars; Finstad, Espen; Ramsey, Christopher Bronk; Martinsen, Julian Robert Post; Nesje, Atle; Solli, Brit; Wangen, Vivian; Callanan, Martin

    2018-01-01

    The melting of perennial ice patches globally is uncovering a fragile record of alpine activity, especially hunting and the use of mountain passes. When rescued by systematic fieldwork (glacial archaeology), this evidence opens an unprecedented window on the chronology of high-elevation activity. Recent research in Jotunheimen and the surrounding mountain areas of Norway has recovered over 2000 finds, many associated with reindeer hunting (e.g. arrows). We report the radiocarbon dates of 153 objects and use a kernel density estimation (KDE) method to determine the distribution of dated events from ca 4000 BCE to the present. Interpreted in light of shifting environmental, preservation and socio-economic factors, these new data show counterintuitive trends in the intensity of reindeer hunting and other high-elevation activity. Cold temperatures may sometimes have kept humans from Norway's highest elevations, as expected based on accessibility, exposure and reindeer distributions. In times of increasing demand for mountain resources, however, activity probably continued in the face of adverse or variable climatic conditions. The use of KDE modelling makes it possible to observe this patterning without the spurious effects of noise introduced by the discrete nature of the finds and the radiocarbon calibration process. PMID:29410869
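
    As a rough illustration of the KDE step (a fixed-bandwidth Gaussian KDE on invented dates; the paper's Bayesian KDE model on calibrated radiocarbon dates is considerably more sophisticated):

    ```python
    # Gaussian KDE over dated events, smoothing discrete find dates into a
    # continuous activity curve. The date clusters below are invented, not
    # the paper's data (negative years = BCE).
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(4)
    events = np.concatenate([rng.normal(-1700, 150, 40),
                             rng.normal(600, 120, 80),
                             rng.normal(1300, 100, 33)])

    kde = gaussian_kde(events)
    grid = np.linspace(-4000, 2000, 601)
    density = kde(grid)
    print("peak activity around year", int(grid[np.argmax(density)]))
    ```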

  11. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
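
    A minimal sketch of the routine the authors recommend, computing distributional descriptives for a score vector (the discreteness and ceiling measures below are illustrative choices, not the authors' exact definitions):

    ```python
    # Descriptive statistics for a test score distribution: skewness,
    # excess kurtosis, a crude discreteness index (share of mass on the
    # modal score), and a ceiling-effect check. Toy bounded scores are
    # simulated from a binomial to mimic a right-shifted raw-score scale.
    import numpy as np
    from scipy import stats

    scores = np.random.default_rng(0).binomial(n=60, p=0.8, size=5000)

    print("skewness:", stats.skew(scores))
    print("excess kurtosis:", stats.kurtosis(scores))        # Fisher definition
    values, counts = np.unique(scores, return_counts=True)
    print("modal-score share:", counts.max() / counts.sum())
    print("ceiling share:", np.mean(scores == scores.max()))
    ```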

  12. Scattering in discrete random media with implications to propagation through rain. Ph.D. Thesis George Washington Univ., Washington, D.C.

    NASA Technical Reports Server (NTRS)

    Ippolito, L. J., Jr.

    1977-01-01

    The multiple scattering effects on wave propagation through a volume of discrete scatterers were investigated. The mean field and intensity for a distribution of scatterers were developed using a discrete random media formulation, and second-order series expansions for the mean field and total intensity were derived for one-dimensional and three-dimensional configurations. The volume distribution results were shown to proceed directly from the one-dimensional results. The multiple scattering intensity expansion was compared to the classical single scattering intensity, and the classical result was found to represent only the first three terms of the total intensity expansion. The Foldy approximation to the mean field was applied to develop the coherent intensity and was found to exactly represent all coherent terms of the total intensity.

  13. Research on pyrolysis behavior of Camellia sinensis branches via the Discrete Distributed Activation Energy Model.

    PubMed

    Zhou, Bingliang; Zhou, Jianbin; Zhang, Qisheng

    2017-10-01

    This study investigates the pyrolysis behavior of Camellia sinensis branches using the Discrete Distributed Activation Energy Model (DAEM) and thermogravimetric experiments. The Discrete DAEM method is used to describe the pyrolysis process of Camellia sinensis branches as dominated by 12 characterized reactions. The decomposition mechanism of Camellia sinensis branches and the interaction among its components are observed, and the reaction at 350.77°C marks a significant boundary between the first and second reaction ranges. The pyrolysis process of Camellia sinensis branches at a heating rate of 10,000°C/min is predicted, providing valuable references for gasification or combustion. The relationship and functional dependence between four typical indexes and heating rates from 10 to 10,000°C/min are revealed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Fractional System Identification: An Approach Using Continuous Order-Distributions

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Lorenzo, Carl F.

    1999-01-01

    This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.

  15. The impact of flavour, device type and warning messages on youth preferences for electronic nicotine delivery systems: evidence from an online discrete choice experiment.

    PubMed

    Shang, Ce; Huang, Jidong; Chaloupka, Frank J; Emery, Sherry L

    2017-11-02

    To examine the impact of flavour, device type and health warning messages on youth preference for electronic nicotine delivery systems (ENDS), and to provide evidence and data to inform the Food and Drug Administration's potential regulatory actions on ENDS. An online discrete choice experiment was conducted in September 2015. Each participant was given nine choice sets and asked to choose one out of two alternative ENDS products, with varying characteristics in three attributes (flavour, device type and warning message). The impact of the attributes on the probability of choosing ENDS was analysed using conditional and nested logit regressions, controlling for individual sociodemographic characteristics and current smoking status. A general population sample of 515 participants (50 ever-users and 465 never-users of ENDS) aged 14-17 years were recruited to complete the experiment using an online panel. Fruit/sweets/beverage flavours significantly increase the probability of choosing ENDS among youth (p<0.01 for never-users and <0.1 for ever-users) and flavour has the most pronounced impact among three attributes. Among never-users, menthol flavour also increases (p<0.05) the probability of choosing ENDS compared with tobacco flavour. Vaping devices that are modifiable, compared with cigarette-like e-cigarettes, increase (p<0.05) the probability of choosing ENDS among adolescent never-users. Warning messages reduce (p<0.01) the probability of choosing ENDS among never-users. Restricting fruit/sweets/beverage flavours in ENDS, regulating modifiable vaping devices and adopting strong health warning messages may reduce the uptake of ENDS among youth. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
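
    The conditional logit model referenced above assigns each alternative a choice probability of the familiar form below (standard notation; the paper's exact specification, including the nesting structure, is not reproduced here), where x_ij stacks the flavour, device-type and warning-message attributes of alternative j shown to respondent i:

    ```latex
    P(y_i = j) \;=\; \frac{\exp\!\big(x_{ij}^{\top}\beta\big)}
                          {\sum_{k} \exp\!\big(x_{ik}^{\top}\beta\big)}
    ```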

  16. Evaluating the Discrete Element Method as a Tool for Predicting the Seasonal Evolution of the MIZ

    DTIC Science & Technology

    2014-09-30

    distribution (Hopkins & Thorndike 2006). The DEM treats sea ice as a collection of discrete pieces of ice, thus affording the method certain... Hopkins, M. A., & Thorndike, A. S. (2006). Floe formation in Arctic sea ice. Journal of Geophysical Research.

  17. Using the Nobel Laureates in Economics to Teach Quantitative Methods

    ERIC Educational Resources Information Center

    Becker, William E.; Greene, William H.

    2005-01-01

    The authors show how the work of Nobel Laureates in economics can enhance student understanding and bring them up to date on topics such as probability, uncertainty and decision theory, hypothesis testing, regression to the mean, instrumental variable techniques, discrete choice modeling, and time-series analysis. (Contains 2 notes.)

  18. Beyond Nonutilization: Irrelevant Cues Can Gate Learning in Probabilistic Categorization

    ERIC Educational Resources Information Center

    Little, Daniel R.; Lewandowsky, Stephan

    2009-01-01

    In probabilistic categorization, also known as multiple cue probability learning (MCPL), people learn to predict a discrete outcome on the basis of imperfectly valid cues. In MCPL, normatively irrelevant cues are usually ignored, which stands in apparent conflict with recent research in deterministic categorization that has shown that people…

  19. Core-Plus Mathematics. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2010

    2010-01-01

    "Core-Plus Mathematics" is a four-year curriculum that replaces the traditional sequence with courses that each feature interwoven strands of algebra and functions, statistics and probability, geometry and trigonometry, and discrete mathematics. The first three courses in the series provide a common core of broadly useful mathematics,…

  20. On Discrete-Time Pursuit-Evasion Games with Sensing Limitations

    DTIC Science & Technology

    2008-01-01

    observations in the biology literature by Caraco and Wolf [24] that have reported higher group size in foraging lions during the wet season (prey... [24] T. Caraco and...

  1. Linear Programming Problems for Generalized Uncertainty

    ERIC Educational Resources Information Center

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed for the case where the uncertainty does not assign a probability to each realization. A well known model that can handle…

  2. Detecting Land-based Signals in the Near-shore Zone of Lake Erie During Summer 2009

    EPA Science Inventory

    We conducted two styles of nearshore surveys in Lake Erie from August to mid-September 2009. The first used a spatially-balanced probability survey (SBS) design to establish discrete stations within a GIS-defined target population, the nearshore zone extending approximately 5 km...

  3. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  4. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems.

    PubMed

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  5. Development of discrete gas kinetic scheme for simulation of 3D viscous incompressible and compressible flows

    NASA Astrophysics Data System (ADS)

    Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.

    2016-08-01

    The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments can be exactly satisfied by weighted summation of distribution functions at discrete points. It was found that the integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, can exactly match the integral. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resultant from integrals can be replaced by a simple solution process. Several numerical examples including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around DPW-W1 wing and supersonic flow around NACA0012 airfoil are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
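
    A minimal sketch of the moment evaluation described above (the cube-corner direction set with equal weights is the natural D3Q8 choice and is an assumption here, as is the toy near-equilibrium distribution):

    ```python
    # D3Q8-style moment evaluation: conservative variables as weighted sums
    # of the distribution function at eight discrete points on a sphere in
    # phase-velocity space.
    import numpy as np

    dirs = np.array([[sx, sy, sz] for sx in (-1, 1)
                     for sy in (-1, 1) for sz in (-1, 1)]) / np.sqrt(3.0)
    e = 1.0 * dirs                    # 8 unit-speed discrete velocities
    w = np.full(8, 1.0 / 8.0)         # equal quadrature weights

    rho0, u = 1.0, np.array([0.1, 0.0, 0.0])
    f = rho0 * (1.0 + 3.0 * (e @ u))  # toy near-equilibrium distribution;
                                      # 3 = 1/(sum_i w_i e_ix^2) recovers rho0*u
    rho = np.sum(w * f)               # zeroth moment: density
    mom = (w * f) @ e                 # first moment: momentum density
    print("rho =", rho, " momentum =", mom.round(6))
    ```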

  6. A network of discrete events for the representation and analysis of diffusion dynamics.

    PubMed

    Pintus, Alberto M; Pazzona, Federico G; Demontis, Pierfranco; Suffritti, Giuseppe B

    2015-11-14

    We developed a coarse-grained description of the phenomenology of diffusive processes, in terms of a space of discrete events and its representation as a network. Once a proper classification of the discrete events underlying the diffusive process is carried out, their transition matrix is calculated on the basis of molecular dynamics data. This matrix can be represented as a directed, weighted network where nodes represent discrete events, and the weight of edges is given by the probability that one follows the other. The structure of this network reflects dynamical properties of the process of interest in such features as its modularity and the entropy rate of nodes. As an example of the applicability of this conceptual framework, we discuss here the physics of diffusion of small non-polar molecules in a microporous material, in terms of the structure of the corresponding network of events, and explain on this basis the diffusivity trends observed. A quantitative account of these trends is obtained by considering the contribution of the various events to the displacement autocorrelation function.
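
    A minimal sketch of this construction (with an invented toy event sequence standing in for events classified from molecular dynamics data):

    ```python
    # Build the transition matrix of a sequence of discrete events and view
    # it as a weighted directed network; the entropy rate of each node is
    # the Shannon entropy of its outgoing edge weights.
    import numpy as np

    events = "ABABCCABAC"             # toy event sequence (hypothetical labels)
    labels = sorted(set(events))
    idx = {e: i for i, e in enumerate(labels)}

    counts = np.zeros((len(labels), len(labels)))
    for a, b in zip(events, events[1:]):
        counts[idx[a], idx[b]] += 1

    P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic matrix

    safe_P = np.where(P > 0, P, 1.0)                 # log(1) = 0 for absent edges
    node_entropy = -(P * np.log2(safe_P)).sum(axis=1)
    for e in labels:
        print(e, "->", dict(zip(labels, P[idx[e]].round(2))),
              " entropy:", round(node_entropy[idx[e]], 3))
    ```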

  7. Determination of the density of surface states at the semiconductor-insulator interface in a metal-insulator-semiconductor structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulyamov, G., E-mail: Gulyamov1949@rambler.ru; Sharibaev, N. U.

    2011-02-15

    The temporal dependence of thermal generation of electrons from occupied surface states at the semiconductor-insulator interface in a metal-insulator-semiconductor structure is studied. It is established that, at low temperatures, the derivative of the probability of depopulation of occupied surface states with respect to energy is represented by the Dirac δ function. It is shown that the density of states of a finite number of discrete energy levels under high-temperature measurements manifests itself as a continuous spectrum, whereas this spectrum appears discrete at low temperatures. A method for processing the continuous spectrum of the density of surface states is suggested that makes it possible to determine the discrete energy spectrum. The obtained results may be conducive to an increase in the resolution of the method of non-stationary spectroscopy of surface states.

  8. From stochastic processes to numerical methods: A new scheme for solving reaction subdiffusion fractional partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au

    We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.
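
    The idea can be seen in a minimal sketch for ordinary (non-fractional) diffusion, where the scheme read off the master equation of an unbiased discrete-time random walk has the heat equation as its diffusion limit (the fractional subdiffusion case treated in the paper uses heavy-tailed waiting times and is more involved):

    ```python
    # Explicit scheme from a discrete-time random walk master equation: at
    # each step, all probability mass jumps left or right with probability
    # 1/2. Mass is conserved and the variance grows linearly in time, as
    # the heat equation u_t = D u_xx requires.
    import numpy as np

    n, steps = 401, 200
    p = np.zeros(n)
    p[n // 2] = 1.0                                   # initial point mass
    for _ in range(steps):
        p = 0.5 * np.roll(p, 1) + 0.5 * np.roll(p, -1)

    x = np.arange(n) - n // 2
    print("mass:", p.sum(), " variance:", (x**2 * p).sum(), "(= number of steps)")
    ```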

  9. State-dependent biasing method for importance sampling in the weighted stochastic simulation algorithm.

    PubMed

    Roh, Min K; Gillespie, Dan T; Petzold, Linda R

    2010-11-07

    The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
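
    The reweighting mechanism can be seen in a generic importance-sampling toy (not the wSSA itself, which biases the reaction-selection step inside the stochastic simulation algorithm): estimating P(X ≥ c) for an Exp(1) variable by sampling from a biased proposal and correcting with likelihood ratios.

    ```python
    # Importance sampling for a rare event: direct Monte Carlo would almost
    # never see X >= 20 under Exp(1), so we sample from a biased Exp(lam)
    # whose mean sits at the threshold and reweight by p(x)/q(x).
    import numpy as np

    rng = np.random.default_rng(1)
    c = 20.0                      # rare-event threshold
    lam = 1.0 / c                 # biased rate; proposal mean equals c
    n = 100_000

    x = rng.exponential(scale=1.0 / lam, size=n)
    w = np.exp(-x) / (lam * np.exp(-lam * x))     # likelihood ratio p(x)/q(x)
    print("IS estimate:", np.mean((x >= c) * w), " exact:", np.exp(-c))
    ```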

  10. On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models

    NASA Astrophysics Data System (ADS)

    Khorunzhiy, O.

    2008-08-01

    Regarding the adjacency matrices of n-vertex graphs and related graph Laplacian we introduce two families of discrete matrix models constructed both with the help of the Erdős-Rényi ensemble of random graphs. Corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.

  11. Convergence Time towards Periodic Orbits in Discrete Dynamical Systems

    PubMed Central

    San Martín, Jesús; Porter, Mason A.

    2014-01-01

    We investigate the convergence towards periodic orbits in discrete dynamical systems. We examine the probability that a randomly chosen point converges to a particular neighborhood of a periodic orbit in a fixed number of iterations, and we use linearized equations to examine the evolution near that neighborhood. The underlying idea is that points of a stable periodic orbit are associated with intervals. We state and prove a theorem that details what regions of phase space are mapped into these intervals (once they are known) and how many iterations are required to get there. We also construct algorithms that allow our theoretical results to be implemented successfully in practice. PMID:24736594

  12. Discrete time modelization of human pilot behavior

    NASA Technical Reports Server (NTRS)

    Cavalli, D.; Soulatges, D.

    1975-01-01

    This modelization starts from the following hypotheses: the pilot's behavior is a time-discrete process, he can perform only one task at a time, and his operating mode depends on the flight subphase considered. The pilot's behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program was developed using two strategies. The first is a Markovian process in which the successive instrument readings are governed by a matrix of conditional probabilities. In the second, the strategy is a heuristic process, and the concepts of mental load and performance are described. The results of the two approaches have been compared with simulation data.
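
    A sketch of the first strategy (the instruments and transition probabilities below are invented for illustration, not taken from the report):

    ```python
    # Markov model of instrument scanning: the next fixation is drawn from
    # a matrix of conditional probabilities given the current fixation.
    import numpy as np

    instruments = ["horizon", "airspeed", "altimeter", "heading"]
    P = np.array([[0.50, 0.20, 0.20, 0.10],    # from horizon
                  [0.60, 0.10, 0.20, 0.10],    # from airspeed
                  [0.55, 0.15, 0.15, 0.15],    # from altimeter
                  [0.70, 0.10, 0.10, 0.10]])   # from heading

    rng = np.random.default_rng(7)
    state, scan = 0, []
    for _ in range(12):
        state = rng.choice(4, p=P[state])
        scan.append(instruments[state])
    print(" -> ".join(scan))
    ```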

  13. Graphite in an Apollo 17 impact melt breccia.

    PubMed

    Steele, A; McCubbin, F M; Fries, M; Glamoclija, M; Kater, L; Nekvasil, H

    2010-07-02

    We report on the detection of discrete grains of crystalline graphite and graphite whiskers (GWs) in an Apollo 17 impact melt breccia. Multiple instances of graphite and GWs within a discrete area of the sample imply that these grains are not terrestrial contamination. Both graphite and GWs are indicative of high-temperature conditions and are probably the result of the impact processes responsible for breccia formation. This suggests that impact processes may be an additional formation mechanism for GWs in the solar system and indicates that the Moon contains a record of ancient carbonaceous material delivered at the time of the Late Heavy Bombardment.

  14. Time-Frequency Domain Analysis of Helicopter Transmission Vibration

    DTIC Science & Technology

    1991-08-01

    Applications of the Wigner-Ville distribution (WVD) have been reported, including speech... [contents fragment: time-frequency distributions; the Wigner-Ville distribution: history, definition, discrete-time/frequency Wigner-Ville distribution] ...signals are examined to indicate how various forms of modulation are portrayed using the Wigner-Ville distribution. Practical examples... A signal is...

  15. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    NASA Astrophysics Data System (ADS)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete-time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete-time parallel update. The method can be applied to both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete-time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site, in contrast to the (single-particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, which we name the "fused" matrix ansatz, to build the stationary distribution explicitly in a matrix product form. We use this algebraic structure to compute physical observables such as correlation functions and the mean particle current.

  16. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.

  17. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions

    PubMed Central

    Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas

    2015-01-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols. PMID:27158191

  18. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  19. The discrete regime of flame propagation

    NASA Astrophysics Data System (ADS)

    Tang, Francois-David; Goroshin, Samuel; Higgins, Andrew

    The propagation of laminar dust flames in iron dust clouds was studied in a low-gravity environment on board a parabolic flight aircraft. The elimination of buoyancy-induced convection and particle settling permitted measurements of fundamental combustion parameters such as the burning velocity and the flame quenching distance over a wide range of particle sizes and in different gaseous mixtures. The discrete regime of flame propagation was observed by substituting nitrogen present in air with xenon, an inert gas with a significantly lower heat conductivity. Flame propagation in the discrete regime is controlled by the heat transfer between neighboring particles, rather than by the particle burning rate used by traditional continuum models of heterogeneous flames. The propagation mechanism of discrete flames depends on the spatial distribution of particles, and thus such flames are strongly influenced by local fluctuations in the fuel concentration. Constant pressure laminar dust flames were observed inside 70 cm long, 5 cm diameter Pyrex tubes. Equally-spaced plate assemblies forming rectangular channels were placed inside each tube to determine the quenching distance defined as the minimum channel width through which a flame can successfully propagate. High-speed video cameras were used to measure the flame speed and a fiber optic spectrometer was used to measure the flame temperature. Experimental results were compared with predictions obtained from a numerical model of a three-dimensional flame developed to capture both the discrete nature and the random distribution of particles in the flame. Though good qualitative agreement was obtained between model predictions and experimental observations, residual g-jitters and the short reduced-gravity periods prevented further investigations of propagation limits in the discrete regime. The full exploration of the discrete flame phenomenon would require the high-quality, long duration reduced gravity environment available only on orbital platforms.

  20. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: as far back as the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, the most relevant distribution in statistical analysis.
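
    As a concrete instance (a minimal sketch, not from the paper): the probability that a normally distributed variable falls in a given interval is obtained from its cumulative distribution function.

    ```python
    # Probability that a normal(mu=100, sigma=15) variable falls within one
    # standard deviation of its mean.
    from scipy.stats import norm

    p = norm.cdf(115, loc=100, scale=15) - norm.cdf(85, loc=100, scale=15)
    print(f"P(85 <= X <= 115) = {p:.4f}")   # about 0.6827
    ```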

  1. A Multiscale Progressive Failure Modeling Methodology for Composites that Includes Fiber Strength Stochastics

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.

    2014-01-01

    A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominately within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations and on the ability to develop models that yield accurate yet tractable results.
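
    A sketch of the kind of length-dependent fiber-strength sampling described above, using the standard weakest-link Weibull form P_f(s) = 1 - exp(-(L/L0)(s/s0)^m); the parameter values, and the exact modification adopted in the paper, are assumptions here:

    ```python
    # Inverse-CDF sampling of fiber strengths from a two-parameter Weibull
    # with the weakest-link length correction: longer fibers are more
    # likely to contain a critical flaw and are therefore statistically
    # weaker.
    import numpy as np

    def sample_fiber_strengths(n, L, L0=25.4, s0=4500.0, m=17.0, rng=None):
        """Fiber strengths (MPa, hypothetical values) for fibers of length L (mm)."""
        rng = rng or np.random.default_rng()
        u = rng.uniform(size=n)
        return s0 * (-np.log(1.0 - u) * L0 / L) ** (1.0 / m)

    rng = np.random.default_rng(3)
    short = sample_fiber_strengths(10_000, L=25.4, rng=rng)
    long_ = sample_fiber_strengths(10_000, L=254.0, rng=rng)
    print(f"mean strength: short {short.mean():.0f} MPa, long {long_.mean():.0f} MPa")
    ```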

  2. Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.

    PubMed

    Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin

    2017-11-01

    The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated based on an iso-conversional model and different distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly follows a single step with mechanism function f(α) = (1 − α)³ and the kinetic parameter pair E₀ = 180.5 kJ/mol and A₀ = 1.5 × 10¹³ s⁻¹. The Logistic distribution was the most suitable activation energy distribution function for CP devolatilization. Although the reaction order n = 3.3 was in accordance with the iso-conversional analysis, the Logistic DAEM could not capture the detailed weight-loss features, since it represents the process as a single-step reaction. In contrast, the non-uniform activation energy distribution in the Miura-Maki DAEM and the non-uniform weight-fraction distribution in the discrete DAEM reflect the weight-loss features, so these two models could describe them. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. A method for modeling finite-core vortices in wake-flow calculations

    NASA Technical Reports Server (NTRS)

    Stremel, P. M.

    1984-01-01

    A numerical method for computing nonplanar vortex wakes represented by finite-core vortices is presented. The approach solves for the velocity on an Eulerian grid, using standard finite-difference techniques; the vortex wake is tracked by Lagrangian methods. In this method, the distribution of continuous vorticity in the wake is replaced by a group of discrete vortices. An axially symmetric distribution of vorticity about the center of each discrete vortex is used to represent the finite-core model. Two distributions of vorticity, or core models, are investigated: a finite distribution of vorticity represented by a third-order polynomial, and a continuous distribution of vorticity throughout the wake. The method provides for a vortex-core model that is insensitive to the mesh spacing. Results for a simplified case are presented. Computed results for the roll-up of a vortex wake generated by wings with different spanwise load distributions are presented; contour plots of the flow-field velocities are included; and comparisons are made of the computed flow-field velocities with experimentally measured velocities.
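
    To illustrate why a finite-core model matters (using a Lamb-Oseen core as a stand-in; the paper's third-order polynomial core model differs in detail), the induced tangential velocity stays bounded near the vortex center instead of diverging like a point vortex:

    ```python
    # Tangential velocity induced by a point vortex versus a finite-core
    # vortex of circulation gamma: the point vortex blows up as 1/r, while
    # the finite-core profile tends to solid-body rotation inside the core.
    import numpy as np

    def v_point(r, gamma=1.0):
        return gamma / (2.0 * np.pi * r)

    def v_finite_core(r, gamma=1.0, rc=0.1):        # Lamb-Oseen stand-in
        return gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-(r / rc) ** 2))

    for r in (0.01, 0.05, 0.1, 0.5):
        print(f"r={r:5.2f}  point={v_point(r):8.3f}  core={v_finite_core(r):7.3f}")
    ```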

  4. Transport in simple networks described by an integrable discrete nonlinear Schrödinger equation.

    PubMed

    Nakamura, K; Sobirov, Z A; Matrasulov, D U; Sawada, S

    2011-08-01

    We elucidate the case in which the Ablowitz-Ladik (AL)-type discrete nonlinear Schrödinger equation (NLSE) on simple networks (e.g., star graphs and tree graphs) becomes completely integrable just as in the case of a simple one-dimensional (1D) discrete chain. The strength of cubic nonlinearity is different from bond to bond, and networks are assumed to have at least two semi-infinite bonds with one of them working as an incoming bond. The present work is a nontrivial extension of our preceding one [Sobirov et al., Phys. Rev. E 81, 066602 (2010)] on the continuum NLSE to the discrete case. We find (1) the solution on each bond is a part of the universal (bond-independent) AL soliton solution on the 1D discrete chain, but it is multiplied by the inverse of the square root of bond-dependent nonlinearity; (2) nonlinearities at individual bonds around each vertex must satisfy a sum rule; and (3) under findings 1 and 2, there exist an infinite number of constants of motion. As a practical issue, with the use of an AL soliton injected through the incoming bond, we obtain transmission probabilities inversely proportional to the strength of nonlinearity on the outgoing bonds.
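
    For a star graph with one incoming bond (labeled 1) and two outgoing bonds, the sum rule takes the reciprocal form familiar from the authors' earlier continuum work, quoted here as an illustration (the paper states the discrete analogue):

    ```latex
    % \beta_k is the cubic-nonlinearity strength on bond k.
    \frac{1}{\beta_1} = \frac{1}{\beta_2} + \frac{1}{\beta_3}
    ```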

  5. Digitally controlled distributed phase shifter

    DOEpatents

    Hietala, V.M.; Kravitz, S.H.; Vawter, G.A.

    1993-08-17

    A digitally controlled distributed phase shifter is comprised of N phase shifters. Digital control is achieved by using N binary length-weighted electrodes located on the top surface of a waveguide. A control terminal is attached to each electrode thereby allowing the application of a control signal. The control signal is either one or two discrete bias voltages. The application of the discrete bias voltages changes the modal index of a portion of the waveguide that corresponds to a length of the electrode to which the bias voltage is applied, thereby causing the phase to change through the underlying portion of the waveguide. The digitally controlled distributed phase shift network has a total phase shift comprised of the sum of the individual phase shifters.

  6. Digitally controlled distributed phase shifter

    DOEpatents

    Hietala, Vincent M.; Kravitz, Stanley H.; Vawter, Gregory A.

    1993-01-01

    A digitally controlled distributed phase shifter is comprised of N phase shifters. Digital control is achieved by using N binary length-weighted electrodes located on the top surface of a waveguide. A control terminal is attached to each electrode thereby allowing the application of a control signal. The control signal is either one or two discrete bias voltages. The application of the discrete bias voltages changes the modal index of a portion of the waveguide that corresponds to a length of the electrode to which the bias voltage is applied, thereby causing the phase to change through the underlying portion of the waveguide. The digitally controlled distributed phase shift network has a total phase shift comprised of the sum of the individual phase shifters.

  7. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function F with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if F takes on finitely many values and each point in the support set of F can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weight only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  8. A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco

    2016-04-01

    We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.

  9. Deposition conditions and distribution features of native gold individuals in the veins of the Tokur mesothermal deposit, Russia

    NASA Astrophysics Data System (ADS)

    Ostapenko, N. S.; Neroda, O. N.

    2016-05-01

    The paper discusses factors in the deposition and concentration of native gold and the spatial distribution of its individuals within the sulfide-poor gold-quartz veins at the mesoabyssal Tokur deposit. The major factors in the deposition of gold were sealing of the hydrothermal system, a sudden drop in fluid pressure, and repeated immiscibility in the fluid. Native gold was deposited in relation to the initial acts of prolonged and discrete opening and reopening of cavities in three mineral assemblages of the productive association II. Most native gold individuals with a visible size of 0.1-1.5 mm occur together with the early generation of quartz 2 on cavity walls adjacent to altered rocks. This is caused by the high content of Au complexes in the initial hydrothermal solutions, favoring rapid oversaturation during cavity formation. Gold fills interstices between grains of quartz 2 throughout the deposit and its mineral assemblages. A vertical distribution pattern of gold has been established in the economic veins; the upper and middle levels are enriched in gold, and samples with the greatest gold grade of 100-500 g/t or higher are concentrated there. This is caused both by the predominance of mineral association II at these levels and by the probable natural flotation of gold grains contained in the gold-gas associate formed during immiscibility of the hydrothermal fluid at the second stage of the ore-forming process.

  10. Fabric and connectivity as field descriptors for deformations in granular media

    NASA Astrophysics Data System (ADS)

    Wan, Richard; Pouragha, Mehdi

    2015-01-01

    Granular materials involve microphysics across the various scales giving rise to distinct behaviours of geomaterials, such as steady states, plastic limit states, non-associativity of plastic and yield flow, as well as instability of homogeneous deformations through strain localization. Incorporating such micro-scale characteristics is one of the biggest challenges in the constitutive modelling of granular materials, especially when micro-variables may be interdependent. With this motivation, we use two micro-variables such as coordination number and fabric anisotropy computed from tessellation of the granular material to describe its state at the macroscopic level. In order to capture functional dependencies between micro-variables, the correlation between coordination number and fabric anisotropy limits is herein formulated at the particle level rather than on an average sense. This is the essence of the proposed work which investigates the evolutions of coordination number distribution (connectivity) and anisotropy (contact normal) distribution curves with deformation history and their inter-dependencies through discrete element modelling in two dimensions. These results enter as probability distribution functions into homogenization expressions during upscaling to a continuum constitutive model using tessellation as an abstract representation of the granular system. The end product is a micro-mechanically inspired continuum model with both coordination number and fabric anisotropy as underlying micro-variables incorporated into a plasticity flow rule. The derived plastic potential bears striking resemblance to cam-clay or stress-dilatancy-type yield surfaces used in soil mechanics.

  11. Warm Absorbers in X-rays (WAX), a comprehensive high resolution grating spectral study of a sample of Seyfert galaxies

    NASA Astrophysics Data System (ADS)

    Laha, S.; Guainazzi, M.; Dewangan, G.; Chakravorty, S.; Kembhavi, A.

    2014-07-01

    We present results from a homogeneous analysis of the broadband 0.3-10 keV CCD-resolution spectra, as well as of the soft X-ray high-resolution grating spectra, of a hard X-ray flux-limited sample of 26 Seyfert galaxies observed with XMM-Newton. We place a strict lower limit of 50% on the detection fraction. We find a gap in the distribution of the ionisation parameter in the range 0.5

  12. Last-position elimination-based learning automata.

    PubMed

    Zhang, Junqi; Wang, Cheng; Zhou, MengChu

    2014-12-01

    An update scheme of the state probability vector of actions is critical for learning automata (LA). The most popular is the pursuit scheme, which pursues the estimated optimal action and penalizes the others. This paper proposes a reverse philosophy that leads to last-position elimination-based learning automata (LELA). The action ranked last in terms of estimated performance is penalized by decreasing its state probability, and it is eliminated when its state probability reaches zero. All active actions, that is, actions with nonzero state probability, equally share the penalized state probability from the last-position action at each iteration. The proposed LELA is characterized by a relaxed convergence condition for the optimal action, an accelerated step size of the state-probability update scheme for the estimated optimal action, and enriched sampling of the estimated nonoptimal actions. A proof of the ϵ-optimality of the proposed algorithm is presented. Last-position elimination is a widespread philosophy in the real world and has also proved helpful for the update scheme of a learning automaton, as shown in simulations of well-known benchmark environments. In the simulations, two versions of LELA, using different selection strategies for the last action, are compared with the classical pursuit algorithms Discretized Pursuit Reward-Inaction (DP(RI)) and Discretized Generalized Pursuit Algorithm (DGPA). Simulation results show that the proposed schemes achieve significantly faster convergence and higher accuracy than the classical ones. In particular, the proposed schemes narrow the interval over which the best parameter for a specific environment must be searched in the classical pursuit algorithms, making parameter tuning easier and saving considerable time in practical applications. Furthermore, the convergence curves and the corresponding variance coefficient curves of the contenders are illustrated to characterize their essential differences and verify the analysis results of the proposed algorithms.
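
    The core update is simple enough to sketch. The following is an illustrative implementation of the last-position elimination step as described in the abstract, with assumed details (sample-mean reward estimates, an optimistic estimate for untried actions, a fixed step size delta, and a hypothetical four-action Bernoulli environment) that may differ from the paper's exact scheme.

```python
# Illustrative LELA sketch: penalize the worst-ranked action, share the
# penalty equally among the other active actions, eliminate at zero.
import numpy as np

def lela_step(p, est, delta):
    """One LELA update of the state probability vector p given estimates est."""
    active = np.flatnonzero(p > 0.0)
    if active.size <= 1:
        return p                              # converged: one action remains
    last = active[np.argmin(est[active])]     # action ranked last
    dec = min(delta, p[last])                 # cannot go below zero
    p = p.copy()
    p[last] -= dec                            # eliminated once it hits zero
    others = active[active != last]
    p[others] += dec / others.size            # equal sharing of the penalty
    return p

rng = np.random.default_rng(1)
true_reward = np.array([0.3, 0.5, 0.7, 0.9])  # hypothetical environment
r = len(true_reward)
p = np.full(r, 1.0 / r)
counts = np.zeros(r); sums = np.zeros(r)

for t in range(5000):
    a = rng.choice(r, p=p)                    # sample an active action
    reward = float(rng.random() < true_reward[a])
    counts[a] += 1; sums[a] += reward
    est = np.where(counts > 0, sums / np.maximum(counts, 1), 1.0)
    p = lela_step(p, est, delta=0.001)

print("final state probabilities:", np.round(p, 3))
```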

  13. Risk, Reward, and Decision-Making in a Rodent Model of Cognitive Aging

    PubMed Central

    Gilbert, Ryan J.; Mitchell, Marci R.; Simon, Nicholas W.; Bañuelos, Cristina; Setlow, Barry; Bizon, Jennifer L.

    2011-01-01

    Impaired decision-making in aging can directly impact factors (financial security, health care) that are critical to maintaining quality of life and independence at advanced ages. Naturalistic rodent models mimic human aging in other cognitive domains, and afford the opportunity to parse the effects of age on discrete aspects of decision-making in a manner relatively uncontaminated by experiential factors. Young adult (5–7 months) and aged (23–25 months) male F344 rats were trained on a probability discounting task in which they made discrete-trial choices between a small certain reward (one food pellet) and a large but uncertain reward (two food pellets with varying probabilities of delivery ranging from 100 to 0%). Young rats chose the large reward when it was associated with a high probability of delivery and shifted to the small but certain reward as probability of the large reward decreased. As a group, aged rats performed comparably to young, but there was significantly greater variance among aged rats. One subgroup of aged rats showed strong preference for the small certain reward. This preference was maintained under conditions in which large reward delivery was also certain, suggesting decreased sensitivity to reward magnitude. In contrast, another subgroup of aged rats showed strong preference for the large reward at low probabilities of delivery. Interestingly, this subgroup also showed elevated preference for probabilistic rewards when reward magnitudes were equalized. Previous findings using this same aged study population described strongly attenuated discounting of delayed rewards with age, together suggesting that a subgroup of aged rats may have deficits associated with accounting for reward costs (i.e., delay or probability). These deficits in cost-accounting were dissociable from the age-related differences in sensitivity to reward magnitude, suggesting that aging influences multiple, distinct mechanisms that can impact cost–benefit decision-making. PMID:22319463

  14. Risk, reward, and decision-making in a rodent model of cognitive aging.

    PubMed

    Gilbert, Ryan J; Mitchell, Marci R; Simon, Nicholas W; Bañuelos, Cristina; Setlow, Barry; Bizon, Jennifer L

    2011-01-01

    Impaired decision-making in aging can directly impact factors (financial security, health care) that are critical to maintaining quality of life and independence at advanced ages. Naturalistic rodent models mimic human aging in other cognitive domains, and afford the opportunity to parse the effects of age on discrete aspects of decision-making in a manner relatively uncontaminated by experiential factors. Young adult (5-7 months) and aged (23-25 months) male F344 rats were trained on a probability discounting task in which they made discrete-trial choices between a small certain reward (one food pellet) and a large but uncertain reward (two food pellets with varying probabilities of delivery ranging from 100 to 0%). Young rats chose the large reward when it was associated with a high probability of delivery and shifted to the small but certain reward as probability of the large reward decreased. As a group, aged rats performed comparably to young, but there was significantly greater variance among aged rats. One subgroup of aged rats showed strong preference for the small certain reward. This preference was maintained under conditions in which large reward delivery was also certain, suggesting decreased sensitivity to reward magnitude. In contrast, another subgroup of aged rats showed strong preference for the large reward at low probabilities of delivery. Interestingly, this subgroup also showed elevated preference for probabilistic rewards when reward magnitudes were equalized. Previous findings using this same aged study population described strongly attenuated discounting of delayed rewards with age, together suggesting that a subgroup of aged rats may have deficits associated with accounting for reward costs (i.e., delay or probability). These deficits in cost-accounting were dissociable from the age-related differences in sensitivity to reward magnitude, suggesting that aging influences multiple, distinct mechanisms that can impact cost-benefit decision-making.

  15. The effectiveness of robotic training depends on motor task characteristics.

    PubMed

    Marchal-Crespo, Laura; Rappo, Nicole; Riener, Robert

    2017-12-01

    Previous research suggests that the effectiveness of robotic training depends on the motor task to be learned. It is still an open question, however, which specific task characteristics influence the efficacy of error-modulating training strategies. Motor tasks can be classified by their time characteristics, in particular their duration (discrete vs. continuous). Continuous tasks require movements without a distinct beginning or end; discrete tasks require fast movements with well-defined postures at the beginning and the end. We developed two games, one requiring a continuous movement (a tracking task) and one requiring discrete movements (a fast reaching task). We conducted an experiment with thirty healthy subjects to evaluate the effectiveness of three error-modulating training strategies (no guidance, error amplification, i.e., repulsive forces proportional to errors, and haptic guidance) on self-reported motivation and learning of the continuous and discrete games. Training with error amplification resulted in better motor learning than haptic guidance, despite the fact that error amplification reduced subjects' interest/enjoyment and perceived competence during training. Only subjects trained with error amplification improved their performance after training the discrete game. In fact, subjects trained without guidance improved their performance in the continuous game significantly more than in the discrete game, probably because the continuous task required greater attentional levels. Error-amplifying training strategies have great potential to promote motor learning in continuous and discrete tasks. However, their lasting negative effects on motivation might limit their applicability in intense neurorehabilitation programs.

  16. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
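
    The two-step idea in this abstract can be sketched compactly: shape the spectrum of white Gaussian noise in the Fourier domain, then map the marginals pointwise to the target distribution via CDF matching. The sketch below is not the authors' exact algorithm; the example power spectrum (a power law) and the target PDF (exponential) are assumptions, and the pointwise transform slightly perturbs the spectrum, a distortion the paper corrects for but which is omitted here.

```python
# Minimal sketch: 2D random field with an assumed power spectrum and a
# non-Gaussian (exponential) amplitude distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N = 256

# Step 1: colored Gaussian field via spectral shaping of white noise.
kx = np.fft.fftfreq(N); ky = np.fft.fftfreq(N)
k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
psd = np.where(k > 0, k**-2.5, 0.0)              # arbitrary example spectrum
white = rng.standard_normal((N, N))
field = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
field = (field - field.mean()) / field.std()     # standardize marginals

# Step 2: Gaussian -> target distribution by CDF matching (inverse transform).
u = stats.norm.cdf(field)                        # uniform marginals
target = stats.expon.ppf(u)                      # desired non-Gaussian PDF

print("skewness of transformed field:", stats.skew(target.ravel()))
```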

  17. The global impact distribution of Near-Earth objects

    NASA Astrophysics Data System (ADS)

    Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.

    2016-02-01

    Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard websites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projects the impact probability of each VI onto the surface of the Earth as a spatial probability distribution, taking into account the orbit solution accuracy and the global impact probability. The method behind ARMOR is introduced, and the tool is validated against two asteroid-Earth collision cases, objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted with the impact probability distribution to evaluate the distributions' conformity with the assumption of a uniform impact distribution. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of a uniform impact distribution, and this result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. The finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking the impact probabilities into account introduces significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and thus represents a man-made component that is largely disconnected from natural processes. It is nevertheless important to consider impact probabilities because they represent the best estimate of where an impact might occur.

  18. Utilization of Historic Information in an Optimisation Task

    NASA Technical Reports Server (NTRS)

    Boesser, T.

    1984-01-01

    One of the basic components of a discrete model of motor behavior and decision making, which describes tracking and supervisory control in unitary terms, is assumed to be a filtering mechanism which is tied to the representational principles of human memory for time-series information. In a series of experiments subjects used the time-series information with certain significant limitations: there is a range-effect; asymmetric distributions seem to be recognized, but it does not seem to be possible to optimize performance based on skewed distributions. Thus there is a transformation of the displayed data between the perceptual system and representation in memory involving a loss of information. This rules out a number of representational principles for time-series information in memory and fits very well into the framework of a comprehensive discrete model for control of complex systems, modelling continuous control (tracking), discrete responses, supervisory behavior and learning.

  19. The Effect of Scale Dependent Discretization on the Progressive Failure of Composite Materials Using Multiscale Analyses

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.

    2013-01-01

    A multiscale modeling methodology, which incorporates a statistical distribution of fiber strengths into coupled micromechanics/finite element analyses, is applied to unidirectional polymer matrix composites (PMCs) to analyze the effect of mesh discretization at both the micro- and macroscales on the predicted ultimate tensile strength (UTS) and failure behavior. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a PMC tensile specimen that initiates at the repeating unit cell (RUC) level. Three different finite element mesh densities were employed, each coupled with an appropriate RUC. Multiple simulations were performed in order to assess the effect of a statistical distribution of fiber strengths on the bulk composite failure and predicted strength. The coupled effects of the micro- and macroscale discretizations were found to have a noticeable effect on the predicted UTS and the computational efficiency of the simulations.
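
    To illustrate how a statistical fiber-strength distribution propagates to a predicted strength, the toy example below (not FEAMAC, and much simpler than the coupled micromechanics/FE analysis) samples fiber strengths from an assumed Weibull distribution and computes the classical equal-load-sharing bundle strength; all parameter values are hypothetical.

```python
# Toy fiber-bundle illustration of strength statistics -> predicted UTS.
import numpy as np

rng = np.random.default_rng(3)
n_fibers, shape_m, scale_s0 = 1000, 5.0, 1.0    # assumed Weibull parameters

def bundle_strength(rng):
    s = np.sort(rng.weibull(shape_m, n_fibers) * scale_s0)
    # When the k weakest fibers have failed, the surviving n-k fibers carry
    # the load; the bundle stress at that point is s[k] * (n - k) / n.
    k = np.arange(n_fibers)
    return np.max(s * (n_fibers - k) / n_fibers)

samples = [bundle_strength(rng) for _ in range(200)]
print(f"predicted bundle UTS: mean={np.mean(samples):.3f}, sd={np.std(samples):.3f}")
```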

  20. Power-law Exponent in Multiplicative Langevin Equation with Temporally Correlated Noise

    NASA Astrophysics Data System (ADS)

    Morita, Satoru

    2018-05-01

    Power-law distributions are ubiquitous in nature. Random multiplicative processes are a basic model for the generation of power-law distributions. For discrete-time systems, the power-law exponent is known to decrease as the autocorrelation time of the multiplier increases. However, for continuous-time systems, it is not yet clear how the temporal correlation affects the power-law behavior. Herein, we analytically investigated a multiplicative Langevin equation with colored noise. We show that the power-law exponent depends on the details of the multiplicative noise, in contrast to the case of discrete-time systems.
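
    The discrete-time fact quoted in this abstract (the exponent falls as the multiplier's autocorrelation time grows) is easy to reproduce numerically. The sketch below is an illustrative Kesten-type recursion x[t+1] = a[t] x[t] + 1 with an AR(1)-correlated log-multiplier and a crude Hill estimate of the tail exponent; the parameter values and estimator details are assumptions, not the paper's continuous-time analysis.

```python
# Discrete-time multiplicative process with temporally correlated noise.
import numpy as np

rng = np.random.default_rng(4)

def tail_exponent(rho, T=400_000, mu=-0.2, sigma=0.4):
    """rho sets the autocorrelation of log a_t; stationary variance is fixed."""
    x = np.empty(T); x[0] = 1.0
    g = 0.0
    innov_sd = sigma * np.sqrt(1.0 - rho**2)
    for t in range(T - 1):
        g = rho * g + innov_sd * rng.standard_normal()
        a = np.exp(mu + g)                 # multiplier with E[log a] = mu < 0
        x[t + 1] = a * x[t] + 1.0
    # Hill estimator over the largest order statistics (transient discarded).
    tail = np.sort(x[T // 10:])[-2000:]
    return 1.0 / np.mean(np.log(tail / tail[0]))

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho:.1f}  estimated power-law exponent ~ {tail_exponent(rho):.2f}")
```

    Under these assumptions the estimated exponent decreases with rho, matching the discrete-time behavior the abstract contrasts with the continuous-time case.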

  1. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
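
    A minimal numerical illustration of the GPT idea: a latent detection state (probability theta) produces one response with a fast Gaussian RT component, while guessing (probability g) mixes a slower component into both responses, so tree parameters set the mixture weights. This two-state toy model, its parameter values, and the shared-SD Gaussians are assumptions for illustration, not the paper's feature comparison model.

```python
# Sketch of a two-state generalized processing tree: fit mixture weights
# (tree parameters) and Gaussian RT components by maximum likelihood.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_loglik(params, resp, rt):
    theta, g, mu1, mu2, sd = params
    f_fast = stats.norm.pdf(rt, mu1, sd)
    f_slow = stats.norm.pdf(rt, mu2, sd)
    # Tree weights: P(A) = theta + (1-theta)g, P(B) = (1-theta)(1-g).
    lik_A = theta * f_fast + (1 - theta) * g * f_slow
    lik_B = (1 - theta) * (1 - g) * f_slow
    lik = np.where(resp == 0, lik_A, lik_B)
    return -np.sum(np.log(lik + 1e-300))

# Simulate data under the model, then recover the parameters.
rng = np.random.default_rng(5)
n, true = 2000, (0.6, 0.5, 0.5, 0.9, 0.15)
detect = rng.random(n) < true[0]
guessA = rng.random(n) < true[1]
resp = np.where(detect | guessA, 0, 1)
rt = np.where(detect, rng.normal(true[2], true[4], n),
                      rng.normal(true[3], true[4], n))

res = minimize(neg_loglik, x0=[0.5, 0.5, 0.4, 1.0, 0.2], args=(resp, rt),
               bounds=[(0.01, 0.99), (0.01, 0.99), (0, 2), (0, 2), (0.05, 1)])
print("estimates (theta, g, mu1, mu2, sd):", np.round(res.x, 2))
```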

  2. LMI-based approach to stability analysis for fractional-order neural networks with discrete and distributed delays

    NASA Astrophysics Data System (ADS)

    Zhang, Hai; Ye, Renyu; Liu, Song; Cao, Jinde; Alsaedi, Ahmad; Li, Xiaodi

    2018-02-01

    This paper is concerned with the asymptotic stability of the Riemann-Liouville fractional-order neural networks with discrete and distributed delays. By constructing a suitable Lyapunov functional, two sufficient conditions are derived to ensure that the addressed neural network is asymptotically stable. The presented stability criteria are described in terms of the linear matrix inequalities. The advantage of the proposed method is that one may avoid calculating the fractional-order derivative of the Lyapunov functional. Finally, a numerical example is given to show the validity and feasibility of the theoretical results.
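
    The paper's criteria are specific to Riemann-Liouville fractional-order networks with delays and are not reproduced here. As a generic illustration of how LMI-type stability conditions are checked numerically, the sketch below verifies the standard integer-order Lyapunov LMI (P > 0 with A^T P + P A < 0) for a hypothetical system matrix using a semidefinite solver; it is a workflow sketch, not the paper's condition.

```python
# Generic LMI feasibility check (NOT the fractional-order criterion).
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])                 # hypothetical system matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),        # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasible:", prob.status == cp.OPTIMAL)
```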

  3. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution

    DOE PAGES

    Cai, Hong; Long, Christopher M.; DeRose, Christopher T.; ...

    2017-01-01

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.
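
    For readers unfamiliar with BB84, the protocol logic that the four polarization states implement can be sketched at an abstract level (no photonics, and an assumed 2% channel flip rate purely for illustration): random bit and basis choices, measurement, and basis sifting.

```python
# Protocol-level BB84 toy: state choice, measurement, and sifting.
import numpy as np

rng = np.random.default_rng(6)
n, flip_prob = 10_000, 0.02          # assumed channel error rate

alice_bits = rng.integers(0, 2, n)
alice_basis = rng.integers(0, 2, n)  # 0: rectilinear (TE/TM), 1: diagonal (45/135 deg)
bob_basis = rng.integers(0, 2, n)

# Matching bases return Alice's bit (up to channel flips); mismatched bases
# give a uniformly random outcome.
match = alice_basis == bob_basis
flips = (rng.random(n) < flip_prob).astype(int)
bob_bits = np.where(match, alice_bits ^ flips, rng.integers(0, 2, n))

# Sifting: keep only the rounds where the bases agree (about half).
key_a, key_b = alice_bits[match], bob_bits[match]
qber = np.mean(key_a != key_b)
print(f"sifted key: {key_a.size} bits of {n}; QBER = {qber:.3f}")
```

    In the hardware described above, the high polarization extinction ratios are what keep the effective flip probability, and hence the QBER, low after sifting.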

  4. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution.

    PubMed

    Cai, Hong; Long, Christopher M; DeRose, Christopher T; Boynton, Nicholas; Urayama, Junji; Camacho, Ryan; Pomerene, Andrew; Starbuck, Andrew L; Trotter, Douglas C; Davids, Paul S; Lentine, Anthony L

    2017-05-29

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.

  5. Silicon photonic transceiver circuit for high-speed polarization-based discrete variable quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Hong; Long, Christopher M.; DeRose, Christopher T.

    We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.

  6. Pattern Storage, Bifurcations, and Groupwise Correlation Structure of an Exactly Solvable Asymmetric Neural Network Model.

    PubMed

    Fasoli, Diego; Cattani, Anna; Panzeri, Stefano

    2018-05-01

    Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks with arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete time evolution equations and binary firing rates, and it could be driven by noise with any distribution. We found analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. By manipulating the conditional probability distribution of the firing rates, we extend to stochastic networks the associative learning rule previously introduced by Personnaz and coworkers. The new learning rule allowed the safe storage, in the presence of noise, of point and cyclic attractors, with useful implications for content-addressable memories. Furthermore, we studied the bifurcation structure of the network dynamics in the zero-noise limit. We analytically derived examples of the codimension 1 and codimension 2 bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. This showed that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry breaking. We also calculated analytically groupwise correlations of neural activity in the network in the stationary regime. This revealed neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks with any number of neurons, although our equations can be realistically solved only for small networks. For completeness, we also derived the network equations in the thermodynamic limit of infinite network size and analytically studied their local bifurcations. All the analytical results were extensively validated by numerical simulations.
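
    The class of model described (discrete time, binary rates, asymmetric weights, additive noise) is straightforward to simulate, which is how such analytical results are typically validated. The sketch below is an illustrative simulation only, with assumed Gaussian weights, Gaussian noise, and a zero threshold; it does not reproduce the paper's closed-form solutions.

```python
# Discrete-time network with asymmetric weights and binary firing rates.
import numpy as np

rng = np.random.default_rng(7)
N, T = 8, 200
J = rng.normal(0, 1 / np.sqrt(N), (N, N))   # asymmetric: J != J.T in general
theta = 0.0                                  # firing threshold
sigma = 0.3                                  # noise amplitude

rates = rng.integers(0, 2, N).astype(float)
history = np.empty((T, N))
for t in range(T):
    # Membrane potential, then binary firing rate via a noisy threshold.
    v = J @ rates + sigma * rng.standard_normal(N)
    rates = (v > theta).astype(float)
    history[t] = rates

print("mean firing rate per neuron:", np.round(history.mean(axis=0), 2))
```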

  7. Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Nyst, M.

    2013-12-01

    We present a Risk Management perspective on earthquake recurrence on mature faults and the ways in which it can be modeled. The specificities of Risk Management relative to Probabilistic Seismic Hazard Assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence; the fact that losses at all return periods are needed (not only at discrete values of the return period); and the set-up of financial models, which sometimes require modeling realizations of the order in which events may occur (i.e., simulated event dates matter, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against the properties of three probability density functions (PDFs) widely used to characterize the inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process, and we propose that, with the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is to use a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs with weights given by the posterior distribution. Finally, we propose to the community: (1) a general debate on how best to incorporate our knowledge (e.g., from geology and geomorphology) of plausible models and model parameters, while also preserving the information on what we do not know; and (2) the creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, special fluid characteristics (pH, compressibility, pressure), fault geometry, and other relevant properties, so that we can monitor whether trends emerge in terms of which model dominates under which conditions.
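
    The Bayesian model combination proposed here can be sketched numerically. The toy example below fits three candidate inter-event-time PDFs to a synthetic catalog, weights them by an approximate model evidence (a BIC-based weight is used as a crude stand-in for the full posterior weight), and evaluates a mixture probability of an event in a future window; the catalog, candidate families, and time values are all assumptions.

```python
# Hedged sketch: posterior-style combination of inter-event-time PDFs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
times = stats.lognorm.rvs(0.5, scale=150, size=12, random_state=rng)  # synthetic catalog (years)

candidates = {"lognormal": stats.lognorm,
              "weibull": stats.weibull_min,
              "gamma": stats.gamma}
fits, bic = {}, {}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0)          # location fixed at zero
    ll = np.sum(dist.logpdf(times, *params))
    k = len(params) - 1                       # free parameters (loc fixed)
    fits[name] = params
    bic[name] = k * np.log(times.size) - 2 * ll

# Weights from BIC differences (stand-in for posterior model probabilities).
b = np.array(list(bic.values()))
w = np.exp(-0.5 * (b - b.min())); w /= w.sum()

# Mixture probability of an event within dt, given t0 years elapsed.
t0, dt = 100.0, 50.0
prob = 0.0
for wi, (name, dist) in zip(w, candidates.items()):
    p = fits[name]
    s0 = dist.sf(t0, *p)
    prob += wi * (s0 - dist.sf(t0 + dt, *p)) / max(s0, 1e-12)

print(dict(zip(candidates, np.round(w, 2))))
print(f"P(event in next {dt:.0f} yr | {t0:.0f} yr elapsed) = {prob:.2f}")
```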

  8. Geomorphic Controls on Aquifer Geometry in Northwestern India

    NASA Astrophysics Data System (ADS)

    van Dijk, W. M.; Densmore, A. L.; Sinha, R.; Gupta, S.; Mason, P. J.; Singh, A.; Joshi, S. K.; Nayak, N.; Kumar, M.; Shekhar, S.

    2014-12-01

    The Indo-Gangetic foreland basin suffers from one of the highest rates of groundwater extraction in the world, especially in the Indian states of Punjab, Haryana and Rajasthan. To understand the effects of this extraction on ground water levels, we must first understand the geometry and sedimentary architecture of the aquifer system, which in turn depend upon its geomorphic setting. We use satellite images and digital elevation models to map the geomorphology of the Sutlej and Yamuna river systems, while aquifer geometry is assessed using ~250 wells that extend to ~300 m depth in Punjab and Haryana. The Sutlej and Yamuna rivers have deposited large sedimentary fans at their outlets. Elongate downslope ridges on the fan surfaces form distributary networks that radiate from the Sutlej and Yamuna fan apices, and we interpret these ridges as paleochannel deposits associated with discrete fan lobes. Paleochannels picked out by soil moisture variations illustrate a complex late Quaternary history of channel avulsion and incision, probably associated with variations in monsoon intensity. Aquifer bodies on the Sutlej and Yamuna fans have a median thickness of 7 and 6 m, respectively, and follow a heavy-tailed distribution, probably because of stacked sand bodies. The percentage of aquifer material in individual lithologs decreases downstream, although the exponent on the thickness distribution remains the same, indicating that aquifer bodies decrease in number down fan but do not thin appreciably. Critically, the interfan area between the Sutlej and Yamuna fans has thinner aquifers and a lower proportion of aquifer material, despite its proximal location. Our data show that the Sutlej and Yamuna fan systems form the major aquifer systems in this area, and that their geomorphic setting therefore provides a first-order control on aquifer distribution and geometry. The large spatial heterogeneity of the system must be considered in any future aquifer management scheme.

  9. The determination of pair-distance distribution by double electron-electron resonance: regularization by the length of distance discretization with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Dzuba, Sergei A.

    2016-08-01

    Pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allows for the determination of their distance distribution function, P(r). P(r) is derived as a solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit for which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has an obvious advantage of the fulfillment of the non-negativity constraint for P(r). The regularization by the increasing of distance discretization length for the case of overlapping broad and narrow distributions may be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from literature for doubly spin-labeled DNA and peptide antibiotics.
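
    The structure of the inverse problem can be sketched compactly: discretize P(r) on a grid, build the dipolar kernel, and solve under the non-negativity constraint. The sketch below uses a simplified DEER kernel and non-negative least squares (NNLS) in place of the paper's Monte Carlo trials of P(r); the coarse distance grid plays the role of the regularizing discretization length. Dipolar-constant value, grid choices, and noise level are illustrative assumptions.

```python
# Simplified first-kind Fredholm inversion of a DEER-type signal.
import numpy as np
from scipy.optimize import nnls

D = 2 * np.pi * 52.04          # approximate dipolar constant, rad MHz nm^3

def deer_kernel(t_us, r_nm, n_u=100):
    """K(t, r) = average over u in [0, 1] of cos((1 - 3u^2) * w_dd * t)."""
    u = (np.arange(n_u) + 0.5) / n_u
    w = D / r_nm[None, :, None] ** 3                       # rad / us
    phase = (1 - 3 * u[None, None, :] ** 2) * w * t_us[:, None, None]
    return np.cos(phase).mean(axis=2)                      # shape (n_t, n_r)

t = np.linspace(0, 3.0, 200)                 # us
r_fine = np.linspace(1.5, 6.0, 200)          # nm, for the "true" signal
p_true = np.exp(-0.5 * ((r_fine - 3.0) / 0.25) ** 2); p_true /= p_true.sum()
noise = 0.01 * np.random.default_rng(9).standard_normal(t.size)
signal = deer_kernel(t, r_fine) @ p_true + noise

# Coarse distance grid: the discretization length itself regularizes.
r_coarse = np.linspace(1.5, 6.0, 25)
K = deer_kernel(t, r_coarse)
p_est, _ = nnls(K, signal)                   # non-negativity built in
p_est /= p_est.sum()
print(f"recovered P(r) peak at r = {r_coarse[np.argmax(p_est)]:.2f} nm")
```

    Refining the coarse grid beyond the limit the paper describes would reintroduce the instability typical of ill-posed first-kind problems.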

  10. A Generalization of "n Choose r"

    ERIC Educational Resources Information Center

    Skurnick, Ronald

    2005-01-01

    The subject matter presented in this article can be used in the classroom to enrich the learning experience of students taking a course that includes a unit on combinatorics, such as discrete mathematics, graph theory, or probability. In order to provide such students with the background needed to appreciate the significance of the generalization…

  11. Everything You Always Wanted to Know About the Mathematics of Sex and Family Planning...But Were Afraid to Calculate

    ERIC Educational Resources Information Center

    Meyer, Rochelle Wilson

    1978-01-01

    The author uses mathematical models that involve only algebra and a few basic ideas in discrete probability to describe the frequency of conception in large human societies. A number of calculations which can be done by students as exercises are given. (MN)

  12. Secret Snowflake: Analysis of a Holiday Gift Exchange

    ERIC Educational Resources Information Center

    Turton, Roger W.

    2007-01-01

    This article describes several methods from discrete mathematics used to simulate and solve an interesting problem occurring at a holiday gift exchange. What is the probability that two people will select each other's names in a random drawing, and how does this result vary with the total number of participants? (Contains 5 figures.)
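
    The headline question lends itself to a short Monte Carlo, in the spirit of the simulations the article describes. The sketch below adopts one common reading of the problem, which is an assumption: names are drawn as a random permutation with no self-picks, and we estimate the probability that at least one pair of participants picked each other.

```python
# Monte Carlo estimate of the mutual-pair probability in a gift exchange.
import numpy as np

rng = np.random.default_rng(10)

def has_mutual_pair(n, rng):
    while True:
        perm = rng.permutation(n)
        if not np.any(perm == np.arange(n)):    # reject self-picks
            break
    # i is in a mutual pair exactly when perm[perm[i]] == i.
    return bool(np.any(perm[perm] == np.arange(n)))

for n in (4, 8, 16, 32):
    trials = 20_000
    hits = sum(has_mutual_pair(n, rng) for _ in range(trials))
    print(f"n={n:2d}: P(at least one mutual pair) ~ {hits / trials:.3f}")
```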

  13. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…

  14. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.
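
    A stochastic hybrid system of the kind described is easy to simulate directly, which helps fix intuition for the jump/flow structure. The minimal sketch below (an assumed two-state example loosely patterned on an ion-channel gate, with hypothetical rates) evolves a continuous variable exactly between exponentially distributed jumps of a discrete state.

```python
# Minimal piecewise-deterministic (stochastic hybrid) simulation:
# between jumps, dx/dt = (s - x) / tau; s in {0, 1} jumps at rates k_on, k_off.
import numpy as np

rng = np.random.default_rng(11)
k_on, k_off, tau = 2.0, 1.0, 0.5      # assumed rates and relaxation time
T = 50.0

t, x, s = 0.0, 0.0, 0                  # time, continuous state, discrete state
ts, xs = [t], [x]
while t < T:
    rate = k_on if s == 0 else k_off
    dt = rng.exponential(1.0 / rate)   # waiting time to the next jump
    # Exact ODE solution between jumps: x -> s + (x - s) exp(-dt / tau).
    x = s + (x - s) * np.exp(-dt / tau)
    t += dt
    s = 1 - s                          # discrete jump
    ts.append(t); xs.append(x)

print(f"{len(ts) - 1} jumps; mean x at jump times ~ {np.mean(xs):.2f}")
```

    Large-deviation analyses such as the paper's characterize the rare trajectories of exactly this kind of process, rather than the typical ones a direct simulation produces.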

  15. Micro-macro correlations and anisotropy in granular assemblies under uniaxial loading and unloading.

    PubMed

    Imole, Olukayode I; Wojtkowski, Mateusz; Magnanimo, Vanessa; Luding, Stefan

    2014-04-01

    The influence of contact friction on the behavior of dense, polydisperse granular assemblies under uniaxial (oedometric) loading and unloading deformation is studied using discrete element simulations. Even though the uniaxial deformation protocol is one of the "simplest" element tests possible, the evolution of the structural anisotropy necessitates careful analysis and understanding, since it is the source of interesting and unexpected observations. On the macroscopic, homogenized, continuum scale, the deviatoric stress ratio and the deviatoric fabric, i.e., the microstructure, behave in a different fashion during uniaxial loading and unloading. The maximal stress ratio and strain increase with increasing contact friction. In contrast, the deviatoric fabric reaches its maximum at a unique strain level independent of friction, with the maximal value decreasing with friction. For unloading, both stress and fabric respond to unloading strain with a friction-dependent delay, but at different strains. On the micro-level, a friction-dependent non-symmetry of the proportion of weak (strong) and sliding (sticking) contacts with respect to the total contacts during loading and unloading is observed. Coupled to this, the directional probability distribution confirms the "memory" and history-dependent behavior of granular systems. Surprisingly, while a rank-2 tensor is sufficient to describe the evolution of the normal force directions, a sixth-order harmonic approximation is necessary to describe the probability distribution of contacts, tangential force, and mobilized friction. We conclude that the simple uniaxial deformation activates microscopic phenomena not only in the active Cartesian directions, but also at intermediate orientations, with the tilt angle being dependent on friction, so that these microstructural features cause the interesting, nontrivial macroscopic behavior.
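
    The harmonic (Fourier) approximation of a directional probability distribution mentioned above is simple to compute from contact angles. The sketch below estimates the low-order Fourier coefficients of a synthetic 2D contact-orientation density containing second- and sixth-order structure; the synthetic density and its amplitudes are assumptions for illustration.

```python
# Fourier coefficients of a 2D contact-orientation density,
# P(theta) ~ (1/2pi) * [1 + sum_k (a_k cos k*theta + b_k sin k*theta)].
import numpy as np

rng = np.random.default_rng(12)
# Synthetic contact angles with 2nd- and 6th-order anisotropy (rejection sampling).
n = 80_000
theta = rng.uniform(-np.pi, np.pi, n)
density = 1 + 0.4 * np.cos(2 * theta) + 0.15 * np.cos(6 * theta)
theta = theta[rng.random(n) < density / 1.55]   # 1.55 bounds the density

# Empirical Fourier coefficients: a_k = 2<cos k*theta>, b_k = 2<sin k*theta>.
for k in (2, 4, 6):
    a_k = 2 * np.mean(np.cos(k * theta))
    b_k = 2 * np.mean(np.sin(k * theta))
    print(f"k={k}: a_k={a_k:+.3f}, b_k={b_k:+.3f}")
```

    With these inputs, a_2 and a_6 are recovered near 0.4 and 0.15 while a_4 stays near zero, showing how higher harmonics capture structure a rank-2 description misses.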

  16. Reliability analysis in the Office of Safety, Environmental, and Mission Assurance (OSEMA)

    NASA Astrophysics Data System (ADS)

    Kauffmann, Paul J.

    1994-12-01

    The technical personnel in the SEMA office are working to provide the highest degree of value-added activities to their support of the NASA Langley Research Center mission. Management perceives that reliability analysis tools and an understanding of a comprehensive systems approach to reliability will be a foundation of this change process. Since the office is involved in a broad range of activities supporting space mission projects and operating activities (such as wind tunnels and facilities), it was not clear what reliability tools the office should be familiar with and how these tools could serve as a flexible knowledge base for organizational growth. Interviews and discussions with the office personnel (both technicians and engineers) revealed that job responsibilities ranged from incoming inspection to component or system analysis to safety and risk. It was apparent that a broad base in applied probability and reliability along with tools for practical application was required by the office. A series of ten class sessions with a duration of two hours each was organized and scheduled. Hand-out materials were developed and practical examples based on the type of work performed by the office personnel were included. Topics covered were: Reliability Systems - a broad system oriented approach to reliability; Probability Distributions - discrete and continuous distributions; Sampling and Confidence Intervals - random sampling and sampling plans; Data Analysis and Estimation - Model selection and parameter estimates; and Reliability Tools - block diagrams, fault trees, event trees, FMEA. In the future, this information will be used to review and assess existing equipment and processes from a reliability system perspective. An analysis of incoming materials sampling plans was also completed. This study looked at the issues associated with Mil Std 105 and changes for a zero defect acceptance sampling plan.

  17. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.
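
    For a one-variable autoregulatory scheme, the ODE-versus-CME comparison can be carried out exactly, since the stationary distribution of a 1D birth-death chain follows from detailed balance. The sketch below uses an assumed self-activating production rate with a steep (Hill-4) feedback and hypothetical parameters chosen so the ODE is bistable; it compares the ODE fixed points with the modes of the stationary CME distribution.

```python
# ODE fixed points vs. stationary CME modes for a self-activating
# birth-death gene model: production f(n), degradation gamma * n.
import numpy as np

b0, b1, K, gamma = 0.4, 7.6, 15.0, 0.2   # assumed rate parameters

def f(n):
    """Self-activating production rate with Hill-4 feedback."""
    return b0 + b1 * n**4 / (K**4 + n**4)

# ODE fixed points: sign changes of the drift f(n) - gamma * n.
grid = np.linspace(0.0, 80.0, 8001)
drift = f(grid) - gamma * grid
fp = grid[:-1][np.sign(drift[:-1]) != np.sign(drift[1:])]
print("ODE fixed points near n =", np.round(fp, 1))

# Stationary CME via detailed balance: pi(n+1)/pi(n) = f(n) / (gamma*(n+1)).
N = 120
log_pi = np.zeros(N + 1)
for n in range(N):
    log_pi[n + 1] = log_pi[n] + np.log(f(n)) - np.log(gamma * (n + 1))
pi = np.exp(log_pi - log_pi.max()); pi /= pi.sum()
modes = [n for n in range(1, N) if pi[n] > pi[n - 1] and pi[n] > pi[n + 1]]
print("stationary CME modes at n =", modes)
```

    Shrinking the system (e.g., lowering K and the production rates together) is the kind of change that lets the modes drift away from the deterministic fixed points, which is the discrepancy the paper characterizes.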

  18. Reliability analysis in the Office of Safety, Environmental, and Mission Assurance (OSEMA)

    NASA Technical Reports Server (NTRS)

    Kauffmann, Paul J.

    1994-01-01

    The technical personnel in the SEMA office are working to provide the highest degree of value-added activities to their support of the NASA Langley Research Center mission. Management perceives that reliability analysis tools and an understanding of a comprehensive systems approach to reliability will be a foundation of this change process. Since the office is involved in a broad range of activities supporting space mission projects and operating activities (such as wind tunnels and facilities), it was not clear what reliability tools the office should be familiar with and how these tools could serve as a flexible knowledge base for organizational growth. Interviews and discussions with the office personnel (both technicians and engineers) revealed that job responsibilities ranged from incoming inspection to component or system analysis to safety and risk. It was apparent that a broad base in applied probability and reliability along with tools for practical application was required by the office. A series of ten class sessions with a duration of two hours each was organized and scheduled. Hand-out materials were developed and practical examples based on the type of work performed by the office personnel were included. Topics covered were: Reliability Systems - a broad system oriented approach to reliability; Probability Distributions - discrete and continuous distributions; Sampling and Confidence Intervals - random sampling and sampling plans; Data Analysis and Estimation - Model selection and parameter estimates; and Reliability Tools - block diagrams, fault trees, event trees, FMEA. In the future, this information will be used to review and assess existing equipment and processes from a reliability system perspective. An analysis of incoming materials sampling plans was also completed. This study looked at the issues associated with Mil Std 105 and changes for a zero defect acceptance sampling plan.

  19. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max, and to a lesser extent the maximum tensile strength σ_n^max, govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  20. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max, and to a lesser extent the maximum tensile strength σ_n^max, govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
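
    The core idea of probabilistic calibration (a posterior distribution over simulator inputs given noisy experimental data) can be shown in miniature. The sketch below is a toy, not the FDEM/SHPB emulator framework: a hypothetical one-parameter "simulator" producing a stress-time-like curve, a synthetic noisy experiment, and a gridded posterior under a uniform prior and Gaussian measurement error.

```python
# Toy probabilistic calibration of one simulator parameter to an experiment.
import numpy as np

rng = np.random.default_rng(13)

def simulator(strength, t):
    """Hypothetical stress-time curve: rise, peak set by `strength`, decay."""
    return strength * t * np.exp(1 - t)

t = np.linspace(0.1, 4.0, 40)
true_strength, noise_sd = 2.5, 0.1
experiment = simulator(true_strength, t) + noise_sd * rng.standard_normal(t.size)

# Gridded posterior: uniform prior x Gaussian likelihood.
grid = np.linspace(1.0, 4.0, 601)
log_post = np.array([
    -0.5 * np.sum((experiment - simulator(s, t)) ** 2) / noise_sd**2
    for s in grid
])
post = np.exp(log_post - log_post.max()); post /= post.sum()
mean = np.sum(grid * post)
sd = np.sqrt(np.sum((grid - mean) ** 2 * post))
print(f"calibrated strength: {mean:.3f} +/- {sd:.3f} (truth {true_strength})")
```

    A flat posterior over some parameter would signal exactly the situation reported above for the softening-curve shape parameters: the experiment is uninformative about them.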
