Sample records for Poisson process based

  1. On the fractal characterization of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that: amongst the realm of Poisson processes which are defined on the positive half-line, and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes-with respect to physical randomness-based measures of statistical heterogeneity-is characterized by exponential Poissonian intensities.

  2. Simulation Methods for Poisson Processes in Nonstationary Systems.

    DTIC Science & Technology

    1978-08-01

    for simulation of nonhomogeneous Poisson processes is stated with log-linear rate function. The method is based on an identity relating the...and relatively efficient new method for simulation of one-dimensional and two-dimensional nonhomogeneous Poisson processes is described. The method is

  3. Effect of non-Poisson samples on turbulence spectra from laser velocimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sree, D.; Kjelgaard, S.O.; Sellers, W.L. III

    1994-12-01

    Spectral estimations from LV data are typically based on the assumption of a Poisson sampling process. It is demonstrated here that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales. A non-Poisson sampling process can occur if there is nonhomogeneous distribution of particles in the flow. Based on the study of a simulated first-order spectrum, it has been shown that a non-Poisson sampling process causes the estimated spectrum to deviate from the true spectrum. Also, in this case the prefiltering techniques do not improve the spectral estimates at higher frequencies. 4 refs.

  4. Monitoring Poisson observations using combined applications of Shewhart and EWMA charts

    NASA Astrophysics Data System (ADS)

    Abujiya, Mu'azu Ramat

    2017-11-01

    The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures of choice for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are sensitive only to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). The new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
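
    The combined scheme can be sketched in a few lines. This is a minimal illustration, not the paper's RSS-based design: the smoothing weight and control-limit multipliers below are illustrative choices, and the asymptotic EWMA limits assume the in-control Poisson mean lam0 is known.

```python
import numpy as np

def shewhart_ewma_poisson(counts, lam0, weight=0.2, L_shewhart=3.0, L_ewma=2.7):
    """Flag shifts in a Poisson count stream with combined Shewhart and EWMA rules.

    lam0 is the in-control mean; the limit multipliers are illustrative.
    Returns the indices at which either chart signals."""
    z = lam0                                  # EWMA statistic starts at the in-control mean
    sigma_z = np.sqrt(lam0 * weight / (2 - weight))  # asymptotic EWMA standard deviation
    signals = []
    for i, c in enumerate(counts):
        # Shewhart rule: a single count outside lam0 +/- L * sqrt(lam0)
        shewhart_hit = abs(c - lam0) > L_shewhart * np.sqrt(lam0)
        # EWMA rule: the smoothed statistic outside its asymptotic limits
        z = weight * c + (1 - weight) * z
        ewma_hit = abs(z - lam0) > L_ewma * sigma_z
        if shewhart_hit or ewma_hit:
            signals.append(i)
    return signals
```

    On an in-control stream the EWMA statistic hovers near lam0; after a sustained small shift it crosses its tighter limits within a few observations, while the Shewhart rule reacts to single extreme counts, which is the complementarity the combined chart exploits.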

  5. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
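
    The variance inflation that motivates the adjustment can be seen from the moments of a compound Poisson total. The sketch below is illustrative (it is not the paper's closed-form interval estimators): when incident counts are Poisson with mean lam_t and incident sizes X are i.i.d., the total C satisfies E[C] = lam_t*E[X] and Var(C) = lam_t*E[X^2], which collapses to the simple Poisson variance when every incident involves exactly one case.

```python
def compound_poisson_moments(lam_t, size_pmf):
    """Mean and variance of a compound Poisson total C: the number of incidents
    is Poisson with mean lam_t, and each incident's size is drawn from size_pmf
    (a dict mapping size -> probability).  Var(C) = lam_t * E[X^2] exceeds the
    naive Poisson variance lam_t * E[X] once multi-case incidents have
    positive probability."""
    ex = sum(k * p for k, p in size_pmf.items())
    ex2 = sum(k * k * p for k, p in size_pmf.items())
    return lam_t * ex, lam_t * ex2
```

    With only single-fatality incidents the two moments coincide (the familiar Poisson case); a 10% share of triple-fatality incidents already inflates the variance to 1.5 times the mean, so intervals built on the simple Poisson model would be too narrow.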

  6. Dependent Neyman type A processes based on common shock Poisson approach

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Kadilar, Cem

    2016-04-01

    The Neyman type A process is used to describe clustered data, since the Poisson process is insufficient for clustering of events. In a multivariate setting, there may be dependencies between multivariate Neyman type A processes. In this study, a dependent form of the Neyman type A process is considered under the common shock approach, and the joint probability function is derived for the dependent Neyman type A Poisson processes. An application based on forest fires in Turkey is then given. The results show that the joint probability function of the dependent Neyman type A processes obtained in this study can be a good tool for assessing the probabilistic fit of the total number of burned trees in Turkey.
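
    The univariate Neyman type A mechanism is easy to simulate (this sketch is not the paper's dependent common-shock construction): draw a Poisson number of clusters, then a Poisson number of events per cluster. Since a sum of independent Poisson variables is Poisson, the total conditional on M clusters is simply Poisson(nu*M).

```python
import numpy as np

def neyman_type_a(lam, nu, size, rng):
    """Neyman type A variates: a Poisson(lam) number of clusters, each of
    Poisson(nu) size.  Conditional on M clusters the total is Poisson(nu*M),
    because a sum of independent Poisson counts is again Poisson."""
    m = rng.poisson(lam, size=size)   # number of clusters per observation
    return rng.poisson(nu * m)        # total events across the clusters
```

    Useful checks: the mean is lam*nu, and the zero probability is exp(-lam*(1 - exp(-nu))), which exceeds the Poisson zero probability at the same mean, reflecting the clustering.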

  7. Complete synchronization of the global coupled dynamical network induced by Poisson noises.

    PubMed

    Guo, Qing; Wan, Fangyi

    2017-01-01

    The different Poisson noise-induced complete synchronization of the global coupled dynamical network is investigated. Based on the stability theory of stochastic differential equations driven by Poisson process, we can prove that Poisson noises can induce synchronization and sufficient conditions are established to achieve complete synchronization with probability 1. Furthermore, numerical examples are provided to show the agreement between theoretical and numerical analysis.

  8. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    PubMed

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
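
    One way to see why distribution-preserving detrending matters: subtracting a fitted trend leaves the Poisson mean-variance coupling in the residuals, whereas binomial thinning does not. The sketch below is not the authors' transformation, just an illustrative alternative that assumes the trend lam_t is known (in practice it would be estimated): thinning a Poisson(lam_t) count with retention probability lam_min/lam_t yields an exactly Poisson(lam_min) count, so the thinned series is stationary and still Poisson.

```python
import numpy as np

def thin_to_stationary(counts, trend, rng):
    """Illustrative stationarization: binomially thin each bin count so every
    bin has the same expected rate.  Thinning a Poisson(lam_t) count with
    probability lam_min/lam_t yields Poisson(lam_min), so the output keeps a
    Poisson photocount distribution, unlike mean subtraction."""
    trend = np.asarray(trend, dtype=float)
    p = trend.min() / trend                    # per-bin retention probability
    return rng.binomial(np.asarray(counts), p)
```

    The thinned counts have a Fano factor (variance/mean) near one, the stationary-Poisson signature that variance-sensitive analyses such as Fano factor estimation rely on.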

  9. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance

    PubMed Central

    Poplová, Michaela; Sovka, Pavel

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal. PMID:29216207

  10. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
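
    The gamma-mixture construction mentioned above (the negative binomial case) is straightforward to simulate; the inverse Gaussian mixture favoured by the paper changes only the mixing distribution. A sketch with illustrative parameters:

```python
import numpy as np

def gamma_poisson(n, shape, scale, rng):
    """Mixed Poisson counts: each subject's rate is Gamma(shape, scale), then
    the count is Poisson(rate).  Marginally this is negative binomial, with
    Var = mean + mean^2/shape > mean (overdispersion)."""
    rates = rng.gamma(shape, scale, size=n)  # between-subject rate variation
    return rng.poisson(rates)                # within-subject Poisson counts
```

    By the law of total variance, Var = E[rate] + Var(rate) = mean + mean^2/shape, so the counts are overdispersed relative to a Poisson with the same mean.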

  11. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. The...the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These...processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the

  12. H∞ filtering for stochastic systems driven by Poisson processes

    NASA Astrophysics Data System (ADS)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

    This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, such as the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Based on this, an H∞ filter is designed such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.

  13. A Novel Method for Preparing Auxetic Foam from Closed-cell Polymer Foam Based on Steam Penetration and Condensation (SPC) Process.

    PubMed

    Fan, Donglei; Li, Minggang; Qiu, Jian; Xing, Haiping; Jiang, Zhiwei; Tang, Tao

    2018-05-31

    Auxetic materials are a class of materials possessing a negative Poisson's ratio. Here we establish a novel method for preparing auxetic foam from closed-cell polymer foam based on a steam penetration and condensation (SPC) process. Using polyethylene (PE) closed-cell foam as an example, the resultant foams treated by the SPC process exhibit a negative Poisson's ratio during stretching and compression testing. The effect of steam-treatment temperature and time on the conversion efficiency of negative Poisson's ratio foam is investigated, and the mechanism by which the SPC method forms the re-entrant structure is discussed. The results indicate that the presence of enough steam within the cells is a critical factor for the negative Poisson's ratio conversion in the SPC process. The pressure difference caused by steam condensation is the driving force for the conversion from conventional closed-cell foam to negative Poisson's ratio foam. Furthermore, the applicability of the SPC process for fabricating auxetic foam is studied by replacing the PE foam with closed-cell polyvinyl chloride (PVC) foam, or by replacing the water steam with ethanol steam. The results verify the universality of the SPC process for fabricating auxetic foams from conventional foams with a closed-cell structure. In addition, we explore the potential application of the auxetic foams obtained by the SPC process in the fabrication of shape-memory polymer materials.

  14. Properties of the Bivariate Delayed Poisson Process

    DTIC Science & Technology

    1974-07-01

    and Lewis (1972) in their Berkeley Symposium paper and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried...Poisson processes. They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres...process because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a

  15. Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case

    NASA Astrophysics Data System (ADS)

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.

    2010-06-01

    Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
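
    For a pure-jump path the two conventions differ by half the quadratic variation, which a short computation makes concrete. The sketch below is illustrative, not the paper's scheme: it evaluates the Ito integral of X dX along a normal compound Poisson path, for which the pathwise identity X_T^2 = 2*(Ito integral) + sum((dX)^2) holds exactly, and the Stratonovich (midpoint) value equals the Ito value plus half the jump quadratic variation.

```python
import numpy as np

def ito_integral_pure_jump(jumps):
    """Pathwise Ito integral of X dX for a pure-jump path starting at 0 with
    the given jump sizes: the sum of X_{t-} * dX over the jumps (the
    left-point, non-anticipating evaluation)."""
    x = np.concatenate(([0.0], np.cumsum(jumps)))  # path values after each jump
    return float(np.sum(x[:-1] * jumps))           # left endpoint times increment
```

    The Stratonovich value would use the midpoint (x[:-1] + x[1:]) / 2 instead of x[:-1], adding exactly half of sum(jumps**2); in the diffusive limit this is the familiar Ito-Stratonovich correction for the Wiener process.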

  16. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.

  17. Noisy cooperative intermittent processes: From blinking quantum dots to human consciousness

    NASA Astrophysics Data System (ADS)

    Allegrini, Paolo; Paradisi, Paolo; Menicucci, Danilo; Bedini, Remo; Gemignani, Angelo; Fronzoni, Leone

    2011-07-01

    We study the superposition of a non-Poisson renewal process with a superimposed Poisson noise. The non-Poisson renewals mark the passage between meta-stable states in systems with self-organization. We propose methods to measure the amount of information due to each of the two independent processes, and we see that a superficial study based on the survival probabilities yields stretched-exponential relaxations. Our method is in fact able to unravel the inverse-power-law relaxation of the isolated non-Poisson process, even when noise is present. We provide examples of this behavior in systems of diverse nature, from blinking nano-crystals to weak turbulence. Finally, we focus our discussion on events extracted from human electroencephalograms, and we discuss their connection with emerging properties of integrated neural dynamics, i.e. consciousness.

  18. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
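
    A doubly stochastic (Cox) mechanism is simple to sketch: let a hidden two-state Markov chain set the rain rate per interval, then draw Poisson counts at that rate. The states, rates, and persistence probability below are illustrative, not fitted to raingauge data; the hallmark of the resulting counts is overdispersion (variance well above the mean), unlike a plain Poisson model.

```python
import numpy as np

def cox_counts(n_bins, rates, stay_prob, rng):
    """Doubly stochastic Poisson counts: a hidden two-state Markov chain picks
    the intensity for each bin (staying in its state with probability
    stay_prob), and the bin count is Poisson at that intensity."""
    state = 0
    counts = np.empty(n_bins, dtype=int)
    for i in range(n_bins):
        if rng.random() > stay_prob:       # occasional switch between dry/wet regimes
            state = 1 - state
        counts[i] = rng.poisson(rates[state])
    return counts
```

    The persistence of the hidden state also induces serial correlation in the counts, the second feature fine time-scale rainfall models need to capture.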

  19. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
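
    The Poisson integral formula underlying the coordinates can be evaluated directly on the unit disk. The sketch below is a plain numerical quadrature of the formula, not the paper's closed-form coordinates: u(r, phi) = (1/2pi) * integral of (1 - r^2) / (1 - 2r cos(phi - theta) + r^2) * f(theta) d(theta) reproduces harmonic functions, e.g. boundary data cos(theta) extends to r*cos(phi).

```python
import numpy as np

def poisson_integral(f, r, phi, n=4096):
    """Evaluate the harmonic extension of boundary data f(theta) at the polar
    point (r, phi), 0 <= r < 1, inside the unit disk via the Poisson integral
    formula, using midpoint quadrature (spectrally accurate for periodic f)."""
    theta = 2 * np.pi * (np.arange(n) + 0.5) / n   # midpoint quadrature nodes
    kernel = (1 - r**2) / (1 - 2 * r * np.cos(phi - theta) + r**2)
    return float(np.mean(kernel * f(theta)))       # mean absorbs the 1/(2*pi) d(theta)
```

    Two sanity checks follow from harmonicity: constant boundary data extends to the same constant (the kernel integrates to one), and cos(theta) extends to r*cos(phi), the real part of z.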

  20. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    PubMed

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  1. Characterization of Nonhomogeneous Poisson Processes Via Moment Conditions.

    DTIC Science & Technology

    1986-08-01

    Poisson processes play an important role in many fields. The Poisson process is one of the simplest counting processes and is a building block for...place of independent increments. This provides a somewhat different viewpoint for examining Poisson processes. In addition, new characterizations for

  2. Pattern analysis of community health center location in Surabaya using spatial Poisson point process

    NASA Astrophysics Data System (ADS)

    Kusumaningrum, Choriah Margareta; Iriawan, Nur; Winahju, Wiwiek Setya

    2017-11-01

    A community health center (puskesmas) is one of the closest health service facilities for the community, providing healthcare at the sub-district level as one of the government-mandated community health clinics located across Indonesia. An increasing number of puskesmas does not by itself guarantee that the basic health services needed in a region are fulfilled. Ideally, a puskesmas should cover a maximum of 30,000 people. The number of puskesmas in Surabaya indicates an unbalanced spread across the area. This research aims to analyze the spread of puskesmas in Surabaya using a spatial Poisson point process model, in order to identify effective locations for Surabaya's puskesmas. The results of the analysis show that the distribution pattern of puskesmas in Surabaya follows a non-homogeneous Poisson process and can be approximated by a mixture Poisson model. Based on the model estimated with a Bayesian mixture model coupled with MCMC, the characteristics of the individual puskesmas have no significant influence on the decision to add a health center at a given location. Instead, factors related to the areas of the sub-districts have to be considered as covariates when deciding whether to add puskesmas in Surabaya.
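
    The homogeneous baseline against which such point patterns are tested is straightforward to simulate: draw a Poisson number of points for the region's area, then place them uniformly. A minimal sketch (a rectangle stands in for the actual city boundary):

```python
import numpy as np

def spatial_poisson(lam, width, height, rng):
    """Simulate a homogeneous spatial Poisson point process with intensity lam
    (expected points per unit area) on a width x height rectangle: the point
    count is Poisson(lam * area) and locations are independent uniforms."""
    n = rng.poisson(lam * width * height)
    xs = rng.uniform(0, width, size=n)
    ys = rng.uniform(0, height, size=n)
    return np.column_stack([xs, ys])
```

    An observed pattern whose local counts deviate systematically from this baseline (as the puskesmas locations do) points to a non-homogeneous intensity, which is what the mixture Poisson model then describes.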

  3. Developing an economical and reliable test for measuring the resilient modulus and Poisson's ratio of subgrade.

    DOT National Transportation Integrated Search

    2010-11-01

    The resilient modulus and Poisson's ratio of base and sublayers in highway use are important parameters in the design and quality control process. The currently used techniques include the CBR (California Bearing Ratio) test, resilient modulus test,...

  4. Renewal processes based on generalized Mittag-Leffler waiting times

    NASA Astrophysics Data System (ADS)

    Cahoy, Dexter O.; Polito, Federico

    2013-03-01

    The fractional Poisson process has recently attracted experts from several fields of study. Its natural generalization of the ordinary Poisson process made the model more appealing for real-world applications. In this paper, we generalized the standard and fractional Poisson processes through the waiting time distribution, and showed their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. Note that the generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived. Note also that these procedures are necessary to make these models more usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
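
    Mittag-Leffler waiting times can be drawn with the inversion formula commonly used in the fractional Poisson simulation literature (this is a generic sketch, not the paper's generalized or stretched-squashed variants): T = -gamma_scale * ln(U) * (sin(beta*pi)/tan(beta*pi*V) - cos(beta*pi))^(1/beta), with U, V independent uniforms. At beta = 1 the formula collapses to exponential waits, recovering the ordinary Poisson process.

```python
import numpy as np

def mittag_leffler_waits(beta, gamma_scale, size, rng):
    """Draw Mittag-Leffler waiting times (the inter-event times of the
    fractional Poisson process) via the standard inversion formula; the
    tail index beta lies in (0, 1], with beta = 1 giving exponential waits."""
    u, v = rng.random(size), rng.random(size)
    factor = (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
              - np.cos(beta * np.pi)) ** (1.0 / beta)
    return -gamma_scale * np.log(u) * factor
```

    Cumulative sums of these waits give the event times of a fractional Poisson sample path; for beta < 1 the heavy-tailed waits produce the bursty, long-memory counting behavior the abstract describes.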

  5. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high-frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.

  6. Simple and Hierarchical Models for Stochastic Test Misgrading.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    1993-01-01

    Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)

  7. Fractional Poisson Fields and Martingales

    NASA Astrophysics Data System (ADS)

    Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely

    2018-02-01

    We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.

  8. A Martingale Characterization of Mixed Poisson Processes.

    DTIC Science & Technology

    1985-10-01

    Dietmar Pfeifer, Technical University Aachen. ...Mixed Poisson processes play an important role in many branches of applied probability, for instance in insurance mathematics and physics (see Albrecht

  9. Stationary and non-stationary occurrences of miniature end plate potentials are well described as stationary and non-stationary Poisson processes in the mollusc Navanax inermis.

    PubMed

    Cappell, M S; Spray, D C; Bennett, M V

    1988-06-28

    Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.

  10. Probabilistic Estimation of Rare Random Collisions in 3 Space

    DTIC Science & Technology

    2009-03-01

    extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the...application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly...the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time

  11. Stochastic modeling for neural spiking events based on fractional superstatistical Poisson process

    NASA Astrophysics Data System (ADS)

    Konno, Hidetoshi; Tamura, Yoshiyasu

    2018-01-01

    In neural spike counting experiments, it is known that there are two main features: (i) the counting number has a fractional power-law growth with time and (ii) the waiting time (i.e., the inter-spike interval) distribution has a heavy tail. The method of superstatistical Poisson processes (SSPPs) is examined to determine whether these main features are properly modeled. Although various mixed/compound Poisson processes can be generated by selecting a suitable distribution of the birth rate of spiking neurons, only the second feature (ii) can be modeled by the method of SSPPs; the first one (i), associated with the effect of long memory, cannot be modeled properly. It is then shown that the two main features can be modeled successfully by a class of fractional SSPP (FSSPP).

  12. Generation of Non-Homogeneous Poisson Processes by Thinning: Programming Considerations and Comparison with Competing Algorithms.

    DTIC Science & Technology

    1978-12-01

    Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a0 + a1*t + a2*t^2). The thinning programs are competitive in both execution
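
    The basic thinning algorithm is short enough to state directly. A minimal sketch for the exp-quadratic intensity discussed above (without the report's execution-time refinements): generate candidate events from a homogeneous process at a majorizing rate, and accept each candidate with probability rate(t)/rate_max.

```python
import math
import random

def thin_nhpp(a0, a1, a2, t_max, rng):
    """Lewis-Shedler thinning: simulate a nonhomogeneous Poisson process with
    intensity exp(a0 + a1*t + a2*t^2) on [0, t_max].  Candidates come from a
    homogeneous process at the majorizing rate; each is kept with probability
    rate(t)/rate_max, which thins the stream to the target intensity."""
    rate = lambda t: math.exp(a0 + a1 * t + a2 * t * t)
    # Exact majorant: exp(quadratic) peaks at an endpoint, or at the vertex if concave.
    candidates = [0.0, t_max]
    if a2 < 0:
        candidates.append(min(max(-a1 / (2 * a2), 0.0), t_max))
    rate_max = max(rate(t) for t in candidates)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)          # next candidate from HPP(rate_max)
        if t > t_max:
            return events
        if rng.random() < rate(t) / rate_max:   # accept with prob rate(t)/rate_max
            events.append(t)
```

    With a1 = a2 = 0 this reduces to a homogeneous process at rate exp(a0), a convenient check; the refinements the report describes mainly reduce the number of rejected candidates.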

  13. Non-Poisson Processes: Regression to Equilibrium Versus Equilibrium Correlation Functions

    DTIC Science & Technology

    2004-07-07

    Physica A 347 (2005) 268–288. Non-Poisson processes: regression...PACS: 05.40.a; 89.75.k; 02.50.Ey. Keywords: Stochastic processes; Non-Poisson processes; Liouville and Liouville-like equations; Correlation function...which is not legitimate with renewal non-Poisson processes, is a correct property if the deviation from the exponential relaxation is obtained by time

  14. Modeling Stochastic Variability in the Numbers of Surviving Salmonella enterica, Enterohemorrhagic Escherichia coli, and Listeria monocytogenes Cells at the Single-Cell Level in a Desiccated Environment

    PubMed Central

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso

    2016-01-01

    ABSTRACT Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. 
IMPORTANCE We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. PMID:27940547
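The paper's central observation — that when initial cell numbers follow a Poisson distribution and each cell independently survives or dies, the number of survivors is again Poisson — is the classical thinning property of the Poisson distribution. A minimal simulation sketch of that property (parameter values are illustrative, not the authors' code):

```python
import math
import random
import statistics

def poisson_sample(rng, lam):
    """Draw one Poisson(lam) variate (Knuth's product-of-uniforms method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def surviving_cells(lam, survival_prob, trials, seed=0):
    """Start each trial with Poisson(lam) cells; each cell survives
    independently with probability survival_prob (Bernoulli thinning)."""
    rng = random.Random(seed)
    return [sum(rng.random() < survival_prob
                for _ in range(poisson_sample(rng, lam)))
            for _ in range(trials)]

counts = surviving_cells(lam=2.0, survival_prob=0.3, trials=20000)
# Thinning property: survivors ~ Poisson(lam * survival_prob), so the
# empirical mean and variance should both be close to 0.6.
print(statistics.fmean(counts), statistics.pvariance(counts))
```

Since mean ≈ variance is the Poisson signature, the two printed values should nearly coincide; the abstract's Weibull-based inactivation curve would replace the fixed `survival_prob` with a time-dependent survival probability.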

  15. Modeling Stochastic Variability in the Numbers of Surviving Salmonella enterica, Enterohemorrhagic Escherichia coli, and Listeria monocytogenes Cells at the Single-Cell Level in a Desiccated Environment.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2017-02-15

    Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. 
We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.

  16. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have concentrated mainly on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we propose a Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable here. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while retaining a simple structure and high efficiency. Furthermore, the proposed model comes with an additional advantage: the diffusion process is well suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.

  17. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    NASA Astrophysics Data System (ADS)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_\alpha(N_\beta(t)) \stackrel{d}{=} \sum_{j=1}^{N_\beta(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\alpha(\tau_k^\nu)$, $\nu \in (0,1]$, where $\tau_k^\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
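The identity quoted above — the outer Poisson process evaluated at an independent Poisson time, equal in distribution to a random sum of i.i.d. Poisson variables — can be checked empirically, using the fact that for a rate-α Poisson process, N_α(s) ~ Poisson(α·s). A rough sketch with illustrative rates:

```python
import math
import random
import statistics

def poisson_sample(rng, lam):
    """Knuth's method for a Poisson(lam) variate."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def composed(rng, alpha, beta, t):
    """N_alpha(N_beta(t)): outer Poisson process run up to the random time N_beta(t)."""
    inner = poisson_sample(rng, beta * t)
    return poisson_sample(rng, alpha * inner)

def random_sum(rng, alpha, beta, t):
    """sum_{j=1}^{N_beta(t)} X_j with X_j i.i.d. Poisson(alpha)."""
    inner = poisson_sample(rng, beta * t)
    return sum(poisson_sample(rng, alpha) for _ in range(inner))

rng = random.Random(1)
a = [composed(rng, 1.5, 2.0, 1.0) for _ in range(20000)]
b = [random_sum(rng, 1.5, 2.0, 1.0) for _ in range(20000)]
# Both should share mean alpha*beta*t = 3.0 and variance alpha*beta*t*(1+alpha) = 7.5.
print(statistics.fmean(a), statistics.fmean(b))
```

The two samples agree in their first two moments, as the distributional identity requires.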

  18. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  19. Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence

    NASA Astrophysics Data System (ADS)

    Cerqueti, Roy; Fenga, Livio; Ventura, Marco

    2018-06-01

    This paper deals with the theme of contagion in financial markets. To this end, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of two countries. The article defines the theoretical conditions to be satisfied in order to state that one of them - the so-called leader - exercises contagion on the others - the followers. Specifically, we employ a probabilistic invariance result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower), and the period under scrutiny is long, ranging from 1970 to 2014.

  20. Fractional Brownian motion and long term clinical trial recruitment

    PubMed Central

    Zhang, Qiang; Lai, Dejian

    2015-01-01

    Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM). However, when the independent-increment structure of the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model, with illustrative examples from different trials and simulations. PMID:26347306

  1. Fractional Brownian motion and long term clinical trial recruitment.

    PubMed

    Zhang, Qiang; Lai, Dejian

    2011-05-01

    Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM). However, when the independent-increment structure of the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model, with illustrative examples from different trials and simulations.

  2. Graphic Simulations of the Poisson Process.

    DTIC Science & Technology

    1982-10-01

    Contents excerpts: Random Numbers and Transformations; The Random Number Generator; III. Poisson Processes User Guide. In the superimposed mode, two Poisson processes are active, each with a different rate parameter (call them Type I and Type II, with respective rates L1 and L2). The value p is generated by the following equation, where L1 and L2 are the rates of the two Poisson processes: p = L1 / (L1 + L2).
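The relation p = L1 / (L1 + L2) quoted in this record is the standard splitting probability for superposed Poisson streams: in the merged process, each event is of Type I with that probability, independently of the others. A small sketch with arbitrarily chosen rates:

```python
import random

def superpose(rate1, rate2, t_end, seed=0):
    """Merge two independent Poisson streams, tagging each event with its type."""
    rng = random.Random(seed)
    events = []
    for rate, label in ((rate1, 1), (rate2, 2)):
        t = rng.expovariate(rate)      # exponential interarrival gaps
        while t < t_end:
            events.append((t, label))
            t += rng.expovariate(rate)
    events.sort()                      # chronological merged stream
    return events

L1, L2, T = 2.0, 3.0, 5000.0
events = superpose(L1, L2, T)
frac_type1 = sum(1 for _, lab in events if lab == 1) / len(events)
# Merged rate should be L1 + L2 = 5; the fraction of Type I events should be
# p = L1 / (L1 + L2) = 0.4.
print(len(events) / T, frac_type1)
```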

  3. Identification of a Class of Filtered Poisson Processes.

    DTIC Science & Technology

    1981-01-01

    AD-A135 371, North Carolina Univ. at Chapel Hill, Dept. of Statistics, 1981. Title: Identification of a Class of Filtered Poisson Processes. Authors: De Brucq, Denis; Gualtierotti, Antonio. A new class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data.

  4. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.

  5. Study on reservoir time-varying design flood of inflow based on Poisson process with time-dependent parameters

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Huang, Jing; Li, Jianchang

    2018-06-01

    The time-varying design flood can make full use of the measured data and can provide the reservoir with a basis for both flood control and operation scheduling. This paper adopts the peak-over-threshold method for flood sampling in unit periods, and a Poisson process with time-dependent parameters for simulation of the reservoir's time-varying design flood. Considering the relationship between the model parameters and its hypotheses, the paper uses the over-threshold intensity, the goodness of fit of the Poisson distribution, and the design flood parameters as criteria for choosing the unit period and threshold of the time-varying design flood, and deduces the Longyangxia reservoir time-varying design flood process at nine design frequencies. The time-varying design flood of inflow is closer to the reservoir's actual inflow conditions, and can be used to adjust the operating water level in the flood season and to make plans for resource utilization of floods in the basin.

  6. A dictionary learning approach for Poisson image deblurring.

    PubMed

    Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong

    2013-07-01

    The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and method noise, the proposed algorithm outperforms state-of-the-art methods.

  7. State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.

    DTIC Science & Technology

    1978-12-01

    The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared

  8. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows effectiveness of the proposed mean-square filter and parameter estimator.

  9. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
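A filtered (shot-noise) Poisson process of the kind this record builds on can be sketched by driving a linear filter with Poisson-timed unit impulses; Campbell's theorem then gives the steady-state mean as rate × ∫h(s)ds. A minimal discrete-time sketch with an exponential pulse h(s) = e^(−s/τ), with illustrative parameters not taken from the paper:

```python
import math
import random

def shot_noise(rate, tau, t_end, dt, seed=0):
    """Exponentially filtered Poisson impulses on a uniform time grid."""
    rng = random.Random(seed)
    n = int(t_end / dt)
    impulses = [0.0] * n
    t = rng.expovariate(rate)
    while t < t_end:                 # homogeneous Poisson event times
        impulses[int(t / dt)] += 1.0
        t += rng.expovariate(rate)
    decay = math.exp(-dt / tau)      # one-step decay of the exponential pulse
    signal, s = [0.0] * n, 0.0
    for k in range(n):
        s = s * decay + impulses[k]  # recursive first-order filter
        signal[k] = s
    return signal

sig = shot_noise(rate=4.0, tau=0.5, t_end=200.0, dt=0.01)
tail = sig[len(sig) // 2:]           # discard the start-up transient
# Campbell's theorem: steady-state mean = rate * integral of h = rate * tau = 2.0
print(sum(tail) / len(tail))
```

The triply stochastic model in the abstract would additionally randomize the rate itself via a slower flow process.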

  10. Morphology and linear-elastic moduli of random network solids.

    PubMed

    Nachtrab, Susan; Kapfer, Sebastian C; Arns, Christoph H; Madadi, Mahyar; Mecke, Klaus; Schröder-Turk, Gerd E

    2011-06-17

    The effective linear-elastic moduli of disordered network solids are analyzed by voxel-based finite element calculations. We analyze network solids given by Poisson-Voronoi processes and by the structure of collagen fiber networks imaged by confocal microscopy. The solid volume fraction ϕ is varied by adjusting the fiber radius, while keeping the structural mesh or pore size of the underlying network fixed. For intermediate ϕ, the bulk and shear modulus are approximated by empirical power laws K(ϕ) ∝ ϕ^n and G(ϕ) ∝ ϕ^m with n≈1.4 and m≈1.7. The exponents for the collagen and the Poisson-Voronoi network solids are similar, and are close to the values n=1.22 and m=2.11 found in a previous voxel-based finite element study of Poisson-Voronoi systems with different boundary conditions. However, the exponents of these empirical power laws are at odds with the analytic values of n=1 and m=2, valid for low-density cellular structures in the limit of thin beams. We propose a functional form for K(ϕ) that models the cross-over from a power law at low densities to a porous solid at high densities; a fit of the data to this functional form yields the asymptotic exponent n≈1.00, as expected. Further, both the intensity of the Poisson-Voronoi process and the collagen concentration in the samples, both of which alter the typical pore or mesh size, affect the effective moduli only by the resulting change of the solid volume fraction. These findings suggest that a network solid with the structure of the collagen networks can be modeled in quantitative agreement by a Poisson-Voronoi process. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Information transmission using non-Poisson regular firing.

    PubMed

    Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

    2013-04-01

    In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
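The contrast the authors draw — Poisson versus more regular non-Poisson firing — is often summarized by the coefficient of variation (CV) of interspike intervals: 1 for a Poisson process, 1/√k for a gamma renewal process of shape k. A hedged sketch with an arbitrary shape value:

```python
import random
import statistics

def isi_cv(sampler, n, seed=0):
    """Coefficient of variation of n interspike intervals drawn from sampler."""
    rng = random.Random(seed)
    isis = [sampler(rng) for _ in range(n)]
    return statistics.pstdev(isis) / statistics.fmean(isis)

# Poisson spiking: exponential ISIs, CV = 1.
cv_poisson = isi_cv(lambda rng: rng.expovariate(1.0), 20000)
# Regular non-Poisson spiking: gamma(shape=4) ISIs with the same mean rate, CV = 0.5.
cv_regular = isi_cv(lambda rng: rng.gammavariate(4.0, 0.25), 20000)
print(cv_poisson, cv_regular)
```

The lower CV of the gamma renewal train is what lets a downstream decoder resolve smaller rate fluctuations, in line with the abstract's detection-threshold result.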

  12. Identification d’une Classe de Processus de Poisson Filtres (Identification of a Class of Filtered Poisson Processes).

    DTIC Science & Technology

    1983-05-20

    A new class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data. (Author)

  13. From Loss of Memory to Poisson.

    ERIC Educational Resources Information Center

    Johnson, Bruce R.

    1983-01-01

    A way of presenting the Poisson process and deriving the Poisson distribution for upper-division courses in probability or mathematical statistics is presented. The main feature of the approach lies in the formulation of Poisson postulates with immediate intuitive appeal. (MNS)
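The "loss of memory" route mentioned in this record starts from memoryless (exponential) interarrival times and derives Poisson-distributed counts, P(N=k) = e^(−λt)(λt)^k / k!. A quick empirical check of that derivation (parameters arbitrary):

```python
import math
import random

def count_in_window(rng, lam, t):
    """Count arrivals in [0, t) when interarrival gaps are Exponential(lam)."""
    n, s = 0, rng.expovariate(lam)
    while s < t:
        n += 1
        s += rng.expovariate(lam)
    return n

rng = random.Random(0)
lam, t, trials = 2.0, 1.0, 50000
counts = [count_in_window(rng, lam, t) for _ in range(trials)]
empirical_p2 = counts.count(2) / trials
theoretical_p2 = math.exp(-lam * t) * (lam * t) ** 2 / math.factorial(2)
# The empirical frequency of N = 2 should match the Poisson pmf value.
print(empirical_p2, theoretical_p2)
```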

  14. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.

  15. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law (α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  16. Weak convergence to isotropic complex SαS random measure.

    PubMed

    Wang, Jun; Li, Yunmeng; Sang, Liheng

    2017-01-01

    In this paper, we prove that an isotropic complex symmetric α-stable (SαS) random measure can be approximated by a complex process constructed from integrals based on the Poisson process with random intensity.

  17. Sparsity-based Poisson denoising with dictionary learning.

    PubMed

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist that convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods in high SNR and achieves state-of-the-art results in low SNR.
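The high-SNR transformations alluded to above include the Anscombe transform A(x) = 2√(x + 3/8), which maps Poisson(λ) data to approximately unit-variance Gaussian data for large λ. A sketch of the variance-stabilizing effect (λ chosen arbitrarily; this is not the paper's low-SNR algorithm):

```python
import math
import random
import statistics

def poisson_sample(rng, lam):
    """Knuth's method for a Poisson(lam) variate."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def anscombe(x):
    """Variance-stabilizing transform for Poisson data."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

rng = random.Random(0)
lam = 30.0
raw = [poisson_sample(rng, lam) for _ in range(20000)]
stabilized = [anscombe(x) for x in raw]
# Raw variance tracks the signal level (~lam); stabilized variance is ~1
# regardless of lam, which is what lets Gaussian denoisers be reused.
print(statistics.pvariance(raw), statistics.pvariance(stabilized))
```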

  18. FIND: difFerential chromatin INteractions Detection using a spatial Poisson process

    PubMed Central

    Chen, Yang; Zhang, Michael Q.

    2018-01-01

    Polymer-based simulations and experimental studies indicate the existence of a spatial dependency between the adjacent DNA fibers involved in the formation of chromatin loops. However, the existing strategies for detecting differential chromatin interactions assume that the interacting segments are spatially independent from the other segments nearby. To resolve this issue, we developed a new computational method, FIND, which considers the local spatial dependency between interacting loci. FIND uses a spatial Poisson process to detect differential chromatin interactions that show a significant difference in their interaction frequency and the interaction frequency of their neighbors. Simulation and biological data analysis show that FIND outperforms the widely used count-based methods and has a better signal-to-noise ratio. PMID:29440282

  19. Fast immersed interface Poisson solver for 3D unbounded problems around arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Gillis, T.; Winckelmans, G.; Chatelain, P.

    2018-02-01

    We present a fast and efficient Fourier-based solver for the Poisson problem around an arbitrary geometry in an unbounded 3D domain. This solver merges two rewarding approaches, the lattice Green's function method and the immersed interface method, using the Sherman-Morrison-Woodbury decomposition formula. The method is intended to be second order up to the boundary. This is verified on two potential flow benchmarks. We also further analyse the iterative process and the convergence behavior of the proposed algorithm. The method is applicable to a wide range of problems involving a Poisson equation around inner bodies, which goes well beyond the present validation on potential flows.
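The solver above targets unbounded domains with immersed boundaries; for contrast, the core Fourier mechanics it builds on can be seen in a few lines on a fully periodic box, where dividing by −|k|² in spectral space inverts the Laplacian exactly. A minimal sketch (periodic, no immersed interface; numpy assumed, and this is not the paper's lattice-Green's-function method):

```python
import numpy as np

def solve_poisson_periodic(f, length=2.0 * np.pi):
    """Solve lap(u) = f on a periodic square via FFT (zero-mean solution)."""
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    f_hat = np.fft.fft2(f)
    k2[0, 0] = 1.0                 # avoid 0/0; the mean mode is fixed below
    u_hat = f_hat / (-k2)
    u_hat[0, 0] = 0.0              # pin the additive constant (zero-mean u)
    return np.real(np.fft.ifft2(u_hat))

# Manufactured solution u = sin(x) sin(y), so lap(u) = -2 sin(x) sin(y).
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.sin(Y)
u = solve_poisson_periodic(-2.0 * np.sin(X) * np.sin(Y))
print(np.max(np.abs(u - u_exact)))
```

For a single Fourier mode the spectral inverse is exact, so the reported error is at round-off level; handling free-space boundary conditions and embedded geometry is precisely what the paper's machinery adds.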

  20. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method

    PubMed Central

    Zhang, Tingting; Kou, S. C.

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure. PMID:21258615
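The kernel idea at the heart of the method — smoothing event times with a bandwidth-h kernel to estimate a time-varying intensity λ(t) — can be sketched simply. Below, a Gaussian-kernel intensity estimate for an inhomogeneous Poisson process generated by thinning; all parameters are illustrative and this is not the authors' estimator or bandwidth selector:

```python
import math
import random

def inhomogeneous_poisson(rate_fn, rate_max, t_end, rng):
    """Lewis thinning: simulate events with time-varying intensity rate_fn."""
    events, t = [], rng.expovariate(rate_max)
    while t < t_end:
        if rng.random() < rate_fn(t) / rate_max:  # accept with prob rate/rate_max
            events.append(t)
        t += rng.expovariate(rate_max)
    return events

def kernel_intensity(events, t0, h):
    """Gaussian-kernel estimate of the intensity at time t0."""
    norm = 1.0 / (h * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-0.5 * ((t0 - t) / h) ** 2) for t in events)

rng = random.Random(0)
rate = lambda t: 5.0 + 4.0 * math.sin(t)
# Superpose many independent realizations to reduce estimator noise.
reps = 200
all_events = [t for _ in range(reps)
              for t in inhomogeneous_poisson(rate, 9.0, 2.0 * math.pi, rng)]
# Per-realization intensity estimate at the peak t = pi/2 (true value 9).
est = kernel_intensity(all_events, math.pi / 2.0, h=0.2) / reps
print(est)
```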

  1. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    PubMed

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.

  2. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

    Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical properties in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating a spatially varying density function, we can easily obtain adaptive sampling.
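Classical serial dart throwing — the baseline this paper parallelizes — is easy to sketch in the Euclidean plane: propose uniform points and reject any that land within distance r of an accepted sample. A toy sketch in the unit square with brute-force distance checks; the paper's contribution is the priority-based parallel and surface-intrinsic version:

```python
import random

def dart_throwing(r, attempts, seed=0):
    """Serial Poisson disk sampling in the unit square by rejection."""
    rng = random.Random(seed)
    pts = []
    r2 = r * r
    for _ in range(attempts):
        x, y = rng.random(), rng.random()
        # Accept only if at least r away from every accepted sample.
        if all((x - px) ** 2 + (y - py) ** 2 >= r2 for px, py in pts):
            pts.append((x, y))
    return pts

samples = dart_throwing(r=0.1, attempts=2000)
# Every pair of accepted samples is at least r apart (the Poisson disk property).
print(len(samples))
```

In practice a spatial grid replaces the brute-force check, and the paper replaces the serial accept loop with priority-ordered parallel conflict resolution.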

  3. Bayesian analysis of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Ho, Chih-Hsiang

    1990-10-01

    The simple Poisson model generally gives a good fit for volcanic eruption forecasting at many volcanoes. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time periods tends to be more variable than a simple Poisson process with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value, as in the assumptions of the simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and the simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use, both in space and time.
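The gamma-mixed Poisson claim — that mixing the Poisson rate over a gamma prior yields negative binomial counts — is easy to check by simulation: the hallmark is overdispersion, Var/Mean = 1 + scale rather than 1. A sketch with arbitrary gamma parameters:

```python
import math
import random
import statistics

def poisson_sample(rng, lam):
    """Knuth's method for a Poisson(lam) variate."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def gamma_mixed_poisson(rng, shape, scale):
    """Draw a rate from Gamma(shape, scale), then a count from Poisson(rate)."""
    return poisson_sample(rng, rng.gammavariate(shape, scale))

rng = random.Random(0)
shape, scale = 3.0, 2.0
counts = [gamma_mixed_poisson(rng, shape, scale) for _ in range(30000)]
m, v = statistics.fmean(counts), statistics.pvariance(counts)
# Negative binomial moments: mean = shape*scale = 6, variance = mean*(1+scale) = 18,
# so Var/Mean = 3 instead of the Poisson value 1.
print(m, v, v / m)
```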

  4. Generic Schemes for Single-Molecule Kinetics. 2: Information Content of the Poisson Indicator.

    PubMed

    Avila, Thomas R; Piephoff, D Evan; Cao, Jianshu

    2017-08-24

    Recently, we described a pathway analysis technique (paper 1) for analyzing generic schemes for single-molecule kinetics based upon the first-passage time distribution. Here, we employ this method to derive expressions for the Poisson indicator, a normalized measure of stochastic variation (essentially equivalent to the Fano factor and Mandel's Q parameter), for various renewal (i.e., memoryless) enzymatic reactions. We examine its dependence on substrate concentration, without assuming all steps follow Poissonian kinetics. Based upon fitting to the functional forms of the first two waiting time moments, we show that, to second order, the non-Poissonian kinetics are generally underdetermined but can be specified in certain scenarios. For an enzymatic reaction with an arbitrary intermediate topology, we identify a generic minimum of the Poisson indicator as a function of substrate concentration, which can be used to tune substrate concentration to the stochastic fluctuations and to estimate the largest number of underlying consecutive links in a turnover cycle. We identify a local maximum of the Poisson indicator (with respect to substrate concentration) for a renewal process as a signature of competitive binding, either between a substrate and an inhibitor or between multiple substrates. Our analysis explores the rich connections between Poisson indicator measurements and microscopic kinetic mechanisms.
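For a renewal process, the long-time Poisson indicator can be computed directly from the first two waiting-time moments as Q = (Var[t] − E[t]²)/E[t]², which vanishes for Poissonian (exponential) waiting times. A quick numerical check, with illustrative rates and sample sizes rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def poisson_indicator(waits):
    """Long-time Poisson indicator (Mandel Q) of a renewal process,
    Q = (Var[t] - E[t]^2) / E[t]^2, from sampled waiting times."""
    m = waits.mean()
    return (waits.var() - m ** 2) / m ** 2

one_step = rng.exponential(1.0, 200_000)               # single Poissonian step
two_step = one_step + rng.exponential(1.0, 200_000)    # two consecutive links

print(poisson_indicator(one_step))   # ~0: Poissonian kinetics
print(poisson_indicator(two_step))   # ~-0.5: floor of -1/2 for two links
```

The −1/2 value illustrates the abstract's point: the most negative attainable Q bounds the number of underlying consecutive links in the turnover cycle (here, Q ≥ −1/n for n links).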

  5. Super-stable Poissonian structures

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2012-10-01

    In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.

  6. Approximations to camera sensor noise

    NASA Astrophysics Data System (ADS)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
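The two candidate models agree in mean and variance but differ in higher moments, which is what makes them distinguishable in principle. A minimal numeric sketch (the 50-electron signal level is arbitrary, not from the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 50.0                                    # arbitrary mean electron count
poisson_px = rng.poisson(signal, 200_000).astype(float)
sdawgn_px = signal + rng.normal(0.0, np.sqrt(signal), 200_000)

def skew(x):
    """Sample skewness: third central moment over std cubed."""
    c = x - x.mean()
    return (c ** 3).mean() / x.std() ** 3

# Same mean and variance, but the Poisson model keeps the positive skewness
# (about 1/sqrt(signal) ~ 0.14) that the symmetric SD-AWGN model discards.
print(skew(poisson_px), skew(sdawgn_px))
```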

  7. The Use of Crow-AMSAA Plots to Assess Mishap Trends

    NASA Technical Reports Server (NTRS)

    Dawson, Jeffrey W.

    2011-01-01

    Crow-AMSAA (CA) plots are used to model reliability growth. Use of CA plots has expanded into other areas, such as tracking events of interest to management, maintenance problems, and safety mishaps. Safety mishaps can often be successfully modeled using a Poisson probability distribution. CA plots show a Poisson process in log-log space. If the safety mishaps form a stable homogeneous Poisson process, a linear fit to the points in a CA plot will have a slope of one. Slopes greater than one indicate a nonhomogeneous Poisson process with increasing occurrence; slopes less than one indicate a nonhomogeneous Poisson process with decreasing occurrence. Changes in slope, known as "cusps," indicate a change in process, which could be an improvement or a degradation. After presenting the CA conceptual framework, examples are given of trending slips, trips, and falls, and ergonomic incidents at NASA (from Agency-level data). Crow-AMSAA plotting is a robust tool for trending safety mishaps that can provide insight into safety performance over time.
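The slope test is easy to reproduce: for a simulated stable homogeneous Poisson process, regressing log cumulative events against log cumulative time gives a slope near one (the unit rate and sample size below are arbitrary, not mishap data).

```python
import numpy as np

rng = np.random.default_rng(2)
# mishap times of a stable homogeneous Poisson process: unit-rate
# exponential interarrival times, accumulated into event times
times = np.cumsum(rng.exponential(1.0, 500))
n_events = np.arange(1, times.size + 1)

# Crow-AMSAA: linear fit in log-log space; slope ~ 1 means no trend,
# > 1 increasing occurrence, < 1 decreasing occurrence
slope, _ = np.polyfit(np.log(times), np.log(n_events), 1)
print(slope)
```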

  8. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  9. Evolutionary inference via the Poisson Indel Process

    PubMed Central

    Bouchard-Côté, Alexandre; Jordan, Michael I.

    2013-01-01

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114–124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296

  10. A random-censoring Poisson model for underreported data.

    PubMed

    de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins

    2017-12-30

    A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM), which accounts for the uncertainty about both the count and the data reporting processes. Consequently, for each region, we are able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with two competing models under different scenarios. The RCPM and the censored Poisson model are applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.

  11. A Poisson process approximation for generalized K-5 confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault-tolerant systems.

  12. FIND: difFerential chromatin INteractions Detection using a spatial Poisson process.

    PubMed

    Djekidel, Mohamed Nadhir; Chen, Yang; Zhang, Michael Q

    2018-02-12

    Polymer-based simulations and experimental studies indicate the existence of a spatial dependency between the adjacent DNA fibers involved in the formation of chromatin loops. However, the existing strategies for detecting differential chromatin interactions assume that the interacting segments are spatially independent from the other segments nearby. To resolve this issue, we developed a new computational method, FIND, which considers the local spatial dependency between interacting loci. FIND uses a spatial Poisson process to detect differential chromatin interactions that show a significant difference in their interaction frequency and the interaction frequency of their neighbors. Simulation and biological data analysis show that FIND outperforms the widely used count-based methods and has a better signal-to-noise ratio. © 2018 Djekidel et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
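The paper's exact algorithm is not reproduced here, but the underlying idea of avoiding underflow and overflow can be sketched by accumulating the CDF from log-probabilities with a log-sum-exp step (a standard technique):

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam), computed via log-probabilities so that
    neither exp(-lam) nor lam**i under/overflows for large lam."""
    # log pmf of i events: i*log(lam) - lam - log(i!)
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1)
                 for i in range(k + 1)]
    m = max(log_terms)                 # log-sum-exp: factor out the max term
    return math.exp(m) * math.fsum(math.exp(t - m) for t in log_terms)

print(poisson_cdf(5, 5.0))        # ~0.616
print(poisson_cdf(1000, 1000.0))  # ~0.508; naive exp(-1000) would underflow
```

The inverse problem mentioned in the abstract (finding the λ that yields a given CDF value) can then be handled by bisection on λ, since the CDF is monotone decreasing in λ for fixed k.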

  14. Poisson process stimulation of an excitable membrane cable model.

    PubMed Central

    Goldfinger, M D

    1986-01-01

    The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple-input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods rendered the axonal output a random but non-Poisson process. While smaller-amplitude stimuli elicited a type of short-interval conditioning, larger-amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505

  15. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

    Since the great mathematician Leonhard Euler initiated the study of graph theory, networks have been one of the most significant research subjects across disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson, and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In simulations, we display the modeling process, analyze the degree distribution of empirical data by statistical methods, and assess the reliability of the proposed networks; the results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed.
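For the nonhomogeneous variant, vertex arrival times can be generated by Lewis-Shedler thinning; the linear rate function below is purely illustrative and is not one of the paper's models.

```python
import numpy as np

def nhpp_thinning(rate_fn, rate_max, t_end, rng):
    """Simulate a nonhomogeneous Poisson process on [0, t_end] by thinning:
    propose homogeneous events at rate_max, keep each with probability
    rate_fn(t) / rate_max (requires rate_fn <= rate_max everywhere)."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_end:
            return np.array(events)
        if rng.random() < rate_fn(t) / rate_max:
            events.append(t)

rng = np.random.default_rng(6)
arrivals = nhpp_thinning(lambda t: 2.0 + t, 12.0, 10.0, rng)
# expected count = integral of (2 + t) over [0, 10] = 70
print(arrivals.size)
```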

  16. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
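The universality claim is easy to probe numerically: in an M/G/∞-style simulation with Poisson synthesis and a non-exponential (gamma) lifetime, the copy-number distribution still shows the Poisson signature mean ≈ variance. The rates and lifetime parameters below are arbitrary, not fitted to any cell data.

```python
import numpy as np

rng = np.random.default_rng(3)
rate, T = 5.0, 50.0        # arbitrary synthesis rate and observation window

def mrna_count():
    """Molecules alive at time T: Poisson births on [0, T], each molecule
    assigned a non-exponential (gamma) lifetime with mean 2.0."""
    n = rng.poisson(rate * T)
    births = rng.uniform(0.0, T, n)
    lifetimes = rng.gamma(4.0, 0.5, n)
    return np.sum(births + lifetimes > T)

counts = np.array([mrna_count() for _ in range(5000)])
# Poisson signature despite complex decay: mean ~ variance ~ rate * 2.0 = 10
print(counts.mean(), counts.var())
```

This is the identifiability problem in miniature: the steady-state moments carry no trace of the gamma lifetime's shape.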

  17. Modeling spiking behavior of neurons with time-dependent Poisson processes.

    PubMed

    Shinomoto, S; Tsubo, Y

    2001-10-01

    Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
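The three coefficients are straightforward to estimate from an interval sequence; for a homogeneous Poisson process they should come out near CV = 1, skewness = 2, and lag-one correlation = 0 (the unit rate and sample size here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
isi = rng.exponential(1.0, 200_000)      # Poisson-process intervals, unit rate

cv = isi.std() / isi.mean()                          # coefficient of variation
c = isi - isi.mean()
skewness = (c ** 3).mean() / isi.std() ** 3          # skewness coefficient
lag1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]          # serial correlation

print(cv, skewness, lag1)   # ~1, ~2, ~0 for a homogeneous Poisson process
```

Deviations of recorded spike trains from this (1, 2, 0) baseline are what the three time-dependent Poisson models in the abstract are meant to capture.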

  18. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
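The key property used here, that superposed independent Poisson processes form another Poisson process with summed intensity, can be verified on counts (the per-note rates below are invented, not estimated from audio):

```python
import numpy as np

rng = np.random.default_rng(8)
# peaks contributed per spectral frame by two simultaneous notes (made-up rates)
note1 = rng.poisson(3.0, 100_000)
note2 = rng.poisson(5.0, 100_000)
chord = note1 + note2            # superposition: Poisson with rate 3 + 5 = 8

print(chord.mean(), chord.var())   # both ~8, consistent with Poisson(8)
```

Because the merged process is itself Poisson, the chord likelihood factorizes without deciding which note produced which peak, which is exactly the data-association step the abstract says is avoided.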

  19. Pumped shot noise in adiabatically modulated graphene-based double-barrier structures.

    PubMed

    Zhu, Rui; Lai, Maoli

    2011-11-16

    Quantum pumping processes are accompanied by considerable quantum noise. Based on the scattering approach, we investigated the pumped shot noise properties in adiabatically modulated graphene-based double-barrier structures. It is found that compared with the Poisson processes, the pumped shot noise is dramatically enhanced where the dc pumped current changes flow direction, which demonstrates the effect of the Klein paradox.

  20. Pumped shot noise in adiabatically modulated graphene-based double-barrier structures

    NASA Astrophysics Data System (ADS)

    Zhu, Rui; Lai, Maoli

    2011-11-01

    Quantum pumping processes are accompanied by considerable quantum noise. Based on the scattering approach, we investigated the pumped shot noise properties in adiabatically modulated graphene-based double-barrier structures. It is found that compared with the Poisson processes, the pumped shot noise is dramatically enhanced where the dc pumped current changes flow direction, which demonstrates the effect of the Klein paradox.

  1. Limiting Distributions of Functionals of Markov Chains.

    DTIC Science & Technology

    1984-08-01

    limiting distributions; periodic nonhomogeneous Poisson processes. ... homogeneous Poisson processes is of interest in itself. The problem considered in this paper is of interest in the theory of partially observable ... where we obtain the limiting distribution of the interevent times. Key Words: Markov Chains, Limiting Distributions, Periodic Nonhomogeneous Poisson

  2. Bringing consistency to simulation of population models--Poisson simulation as a bridge between micro and macro simulation.

    PubMed

    Gustafsson, Leif; Sternad, Mikael

    2007-10-01

    Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix, Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, much larger and more complicated models can be built and executed with Poisson Simulation than is possible with the Markov approach.
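The core update of Poisson Simulation replaces the deterministic flow of a differential-equation step with a Poisson-distributed number of events per time step. A minimal birth-death sketch (the rates, step size, and horizon are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
b, d = 0.10, 0.05          # arbitrary per-capita birth and death rates
dt, t_end = 0.01, 10.0
x = 1000.0                 # initial population (a single aggregated state)

for _ in range(int(t_end / dt)):
    # events per step are Poisson with mean rate * x * dt, so the macro
    # state stays integer-valued in spirit while remaining aggregated
    x += rng.poisson(b * x * dt) - rng.poisson(d * x * dt)

# the deterministic macro model predicts 1000 * exp((b - d) * 10) ~ 1648.7;
# the Poisson Simulation run scatters around that value
print(x)
```

Replacing `rng.poisson(mean)` with `mean` recovers the deterministic Euler step, which is the sense in which the stochastic macro-model stays consistent with its deterministic counterpart.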

  3. Wigner surmises and the two-dimensional homogeneous Poisson point process.

    PubMed

    Sakhr, Jamal; Nieminen, John M

    2006-04-01

    We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 x 2 complex non-Hermitian random matrices.

  4. Doubly stochastic Poisson processes in artificial neural learning.

    PubMed

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.

  5. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    PubMed

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
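A two-state MMPP is easy to simulate: a hidden Markov chain switches the Poisson arrival rate between regimes. The rates and switching probability below are invented for illustration, not the Bracknell fits.

```python
import numpy as np

rng = np.random.default_rng(9)
rates = np.array([0.2, 5.0])   # invented "dry" and "wet" arrival rates
stay = 0.95                    # per-step probability of keeping the regime
dt, steps = 0.1, 100_000

state, total = 0, 0
for _ in range(steps):
    if rng.random() > stay:
        state = 1 - state                    # Markov switch between regimes
    total += rng.poisson(rates[state] * dt)  # arrivals at the current rate

# symmetric switching spends half the time in each regime, so expected
# arrivals ~ 0.5 * (0.2 + 5.0) * steps * dt = 26000
print(total)
```

Covariates such as temperature or humidity enter the full model by modulating the regime rates or transition probabilities; the sketch above keeps them constant.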

  6. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

  7. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.

  8. Effect of non-Poisson samples on turbulence spectra from laser velocimetry

    NASA Technical Reports Server (NTRS)

    Sree, Dave; Kjelgaard, Scott O.; Sellers, William L., III

    1994-01-01

    Spectral analysis of laser velocimetry (LV) data plays an important role in characterizing a turbulent flow and in estimating the associated turbulence scales, which can be helpful in validating theoretical and numerical turbulence models. The determination of turbulence scales is critically dependent on the accuracy of the spectral estimates. Spectral estimations from 'individual realization' laser velocimetry data are typically based on the assumption of a Poisson sampling process. What this Note has demonstrated is that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales.

  9. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. The method uses the survivor function obtained by a modified version of the master equation associated with the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.

  10. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    PubMed

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.

  11. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events

    PubMed Central

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate. PMID:28066225

  12. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime-switching processes. The segmentation algorithm is a nonparametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length; in each patch the process is described by a stationary compound Poisson process, i.e., a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch, and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime-switching models of compound Poisson processes. As an application, we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and observe that our method finds almost three times more patches than the original one.

  13. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  14. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design, as well as at the event times. A method is proposed for fitting a mixed Poisson point process model for the impact of partially observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.

  15. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
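
The under/overflow issue the brief describes is usually handled by working in log space. A minimal Python sketch of that idea (log-sum-exp accumulation; an illustration, not the CUMPOIS algorithm itself):

```python
import math

def log_poisson_pmf(k, lam):
    """Log of the Poisson pmf, stable for large k and lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def poisson_cdf(k, lam):
    """P(X <= k), accumulated in log space so that neither the tiny
    individual terms (underflow) nor lam**k (overflow) is ever formed."""
    log_terms = [log_poisson_pmf(i, lam) for i in range(k + 1)]
    m = max(log_terms)
    # log-sum-exp: factor out the largest term before exponentiating
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))
```

For example, poisson_cdf(2000, 2000.0) evaluates without trouble even though 2000**2000 and exp(-2000) are both far outside double range.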

  16. On the validity of the Poisson assumption in sampling nanometer-sized aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn

    2014-01-01

    A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes (10 nm, 25 nm, 50 nm and 100 nm) by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.

  17. Statistical properties of superimposed stationary spike trains.

    PubMed

    Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan

    2012-06-01

    The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for several second-order statistical quantities (count variability, inter-spike interval (ISI) variability, and ISI correlations) and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
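
The PPD construction above is straightforward to simulate: each inter-spike interval is a dead time plus an exponential waiting time, and superposition is a sorted merge. A minimal sketch with illustrative rates (not the authors' algorithms):

```python
import random

def ppd_spike_train(rate, dead_time, t_max, rng):
    """Poisson process with dead time (PPD): after each spike no spike
    can occur for dead_time; the remainder of the ISI is exponential."""
    t, spikes = 0.0, []
    while True:
        t += dead_time + rng.expovariate(rate)
        if t > t_max:
            return spikes
        spikes.append(t)

def superimpose(trains):
    """Merge several spike trains into one sorted pooled train."""
    return sorted(s for train in trains for s in train)

rng = random.Random(42)
trains = [ppd_spike_train(rate=20.0, dead_time=0.01, t_max=10.0, rng=rng)
          for _ in range(5)]
pooled = superimpose(trains)
```

Each component train never violates the dead time, while the pooled train does, which is one way the superposition deviates from a single PPD.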

  18. Nambu-Poisson gauge theory

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2014-06-01

    We generalize noncommutative gauge theory using Nambu-Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg-Witten map. We construct a covariant Nambu-Poisson gauge theory action, give its first order expansion in the Nambu-Poisson tensor and relate it to a Nambu-Poisson matrix model.

  19. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  20. The Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Everett, James E.

    1993-01-01

    Addresses objections to the validity of assuming a Poisson loglinear model as the generating process for citations from one journal into another. Fluctuations in citation rate, serial dependence on citations, impossibility of distinguishing between rate changes and serial dependence, evidence for changes in Poisson rate, and transitivity…

  1. Analyzing hospitalization data: potential limitations of Poisson regression.

    PubMed

    Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R

    2015-08-01

    Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
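
The overdispersion and excess-zero diagnostics described above are easy to reproduce on simulated data. A hedged sketch (hypothetical parameters chosen to mimic the reported excess of zeros; Poisson draws via Knuth's multiplication method):

```python
import math
import random

def zip_counts(n, p_zero, lam, rng):
    """Zero-inflated Poisson sample: with probability p_zero a structural
    zero (e.g. a patient never hospitalized), else a Poisson(lam) count."""
    out = []
    for _ in range(n):
        if rng.random() < p_zero:
            out.append(0)
            continue
        # Knuth: multiply uniforms until the product drops below e**-lam
        limit, k, prod = math.exp(-lam), 0, rng.random()
        while prod >= limit:
            k += 1
            prod *= rng.random()
        out.append(k)
    return out

rng = random.Random(0)
counts = zip_counts(5000, p_zero=0.58, lam=4.0, rng=rng)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
dispersion = var / mean  # ~1 for a true Poisson; >1 flags overdispersion
```

The variance-to-mean ratio lands well above 1, the signature that plain Poisson regression would understate the variability of such data.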

  2. A Family of Poisson Processes for Use in Stochastic Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2013-12-01

    Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.

  3. Harmonic statistics

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-05-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
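
A Poisson process with harmonic intensity 1/x is easy to sample on a bounded interval, since its total mass over [a, b] is log(b/a) and, given the count, its points are exponentials of uniforms. A minimal sketch (an illustration, not taken from the paper):

```python
import math
import random

def harmonic_poisson_points(a, b, rng):
    """Points of a Poisson process on [a, b] with intensity 1/x.
    The number of points is Poisson(log(b/a)); each point is
    exp(Uniform(log a, log b)), which makes the law scale-invariant."""
    # Poisson count via unit-rate exponential arrivals
    mu, k, t = math.log(b / a), 0, rng.expovariate(1.0)
    while t < mu:
        k += 1
        t += rng.expovariate(1.0)
    return sorted(math.exp(rng.uniform(math.log(a), math.log(b)))
                  for _ in range(k))

rng = random.Random(9)
n_points = [len(harmonic_poisson_points(1.0, math.e ** 3, rng))
            for _ in range(2000)]
mean_count = sum(n_points) / len(n_points)
```

Multiplying every point by a constant c maps the process on [a, b] to the same construction on [ca, cb], which is the scale invariance discussed above.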

  4. A fourth order PDE based fuzzy c- means approach for segmentation of microscopic biopsy images in presence of Poisson noise for cancer detection.

    PubMed

    Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev

    2017-07-01

    For cancer detection from microscopic biopsy images, the image segmentation step used to segment cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also carry intrinsic Poisson noise, and if it is present the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach that can also handle the noise present in the image during the segmentation process itself, i.e. noise removal and segmentation are combined in one step. To address the above issues, in this paper a fourth order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise, combined with a fuzzy c-means segmentation method, is proposed. This approach effectively handles the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation in the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground truth images. The data set contains 31 benign and 27 malignant images of size 896 × 768; the ROI ground truth of all 58 images is also available. Finally, the results obtained from the proposed approach are compared with those of popular segmentation algorithms: fuzzy c-means, color k-means, texture based segmentation, and total variation fuzzy c-means. The experimental results show that the proposed approach provides better results in terms of various performance measures such as Jaccard coefficient, dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, random index, global consistency error, and variation of information as compared to the other segmentation approaches used for cancer detection. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
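
The gamma-Poisson mixture at the heart of the NB process can be sketched directly: draw a gamma rate, then a Poisson count. Marginally the counts are negative binomial, so their variance exceeds their mean (illustrative parameters, not the paper's models):

```python
import random

def gamma_poisson(shape, scale, n, rng):
    """Gamma-mixed Poisson draws: rate ~ Gamma(shape, scale),
    count ~ Poisson(rate). Marginally the counts are negative
    binomial with mean shape*scale and variance mean*(1 + scale)."""
    draws = []
    for _ in range(n):
        lam = rng.gammavariate(shape, scale)
        # Poisson draw by counting unit-rate exponential arrivals in [0, lam)
        k, t = 0, rng.expovariate(1.0)
        while t < lam:
            k += 1
            t += rng.expovariate(1.0)
        draws.append(k)
    return draws

rng = random.Random(3)
counts = gamma_poisson(shape=2.0, scale=3.0, n=20000, rng=rng)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
```

With shape 2 and scale 3 the theoretical mean is 6 and the theoretical variance 24, i.e. strongly overdispersed relative to a Poisson of the same mean.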

  6. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    Poisson ratio is considered as one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time, cost, and labor intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. Structural risk minimization (SRM) principle which is embedded in the SVR structure in addition to empirical risk minimization (EMR) principle provides a robust model for finding quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation in low Poisson ratios and underestimation in high Poisson ratios. These errors were eliminated through implementation of fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian Oil Field. Results indicated that SVR predicted Poisson ratio values are in good agreement with measured values.

  7. A stochastical event-based continuous time step rainfall generator based on Poisson rectangular pulse and microcanonical random cascade models

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph

    2017-04-01

    Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with a constant time step, such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
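
The mass-conserving ("microcanonical") splitting such cascade models use can be sketched in a few lines (a hypothetical uniform weight distribution stands in for the calibrated, sigmoid-parameterized one described above):

```python
import random

def cascade_disaggregate(total, levels, rng):
    """Microcanonical random cascade: repeatedly split each amount into
    two halves with a random weight w and its complement 1 - w, so the
    total rainfall volume is conserved exactly at every level."""
    amounts = [total]
    for _ in range(levels):
        finer = []
        for a in amounts:
            w = rng.uniform(0.2, 0.8)  # hypothetical weight distribution
            finer += [a * w, a * (1.0 - w)]
        amounts = finer
    return amounts

rng = random.Random(4)
event_depth = 12.0  # mm of rain in one event
fine = cascade_disaggregate(event_depth, levels=5, rng=rng)  # 32 intervals
```

Exact mass conservation at every level is what distinguishes the microcanonical cascade from its canonical counterpart, where only the expected mass is preserved.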

  8. Nonlinear Poisson Equation for Heterogeneous Media

    PubMed Central

    Hu, Langhua; Wei, Guo-Wei

    2012-01-01

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into account hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. The good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937

  9. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
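
Poisson resampling rests on the thinning property: keeping each recorded count independently with probability 1/2 turns a Poisson(lam) pixel into a Poisson(lam/2) pixel. A minimal sketch of that identity on a simulated flood-source image (an illustration, not the authors' exact procedure):

```python
import random

def half_count(pixels, rng):
    """Binomial thinning: keep each recorded count with probability 1/2.
    If a pixel value is Poisson(lam) distributed, the thinned value is
    exactly Poisson(lam/2) -- the identity Poisson resampling exploits."""
    return [sum(1 for _ in range(n) if rng.random() < 0.5) for n in pixels]

rng = random.Random(1)
full = [50] * 10000  # a flat flood-source image, 50 counts per pixel
half = half_count(full, rng)
mean_half = sum(half) / len(half)
var_half = sum((h - mean_half) ** 2 for h in half) / (len(half) - 1)
```

Because the input here is a deterministic 50 counts per pixel, the thinned pixels are Binomial(50, 1/2), with mean 25 and variance 12.5; for genuinely Poisson inputs the output is again Poisson.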

  10. Discrimination of shot-noise-driven Poisson processes by external dead time - Application of radioluminescence from glass

    NASA Technical Reports Server (NTRS)

    Saleh, B. E. A.; Tavolacci, J. T.; Teich, M. C.

    1981-01-01

    Ways in which dead time can be used to constructively enhance or diminish the effects of point processes that display bunching in the shot-noise-driven doubly stochastic Poisson point process (SNDP) are discussed. Interrelations between photocount bunching arising in the SNDP and the antibunching character arising from dead-time effects are investigated. It is demonstrated that the dead-time-modified count mean and variance for an arbitrary doubly stochastic Poisson point process can be obtained from the Laplace transform of the single-fold and joint-moment-generating functions for the driving rate process. The theory is in good agreement with experimental values for radioluminescence radiation in fused silica, quartz, and glass, and the process has many applications in pulse, particle, and photon detection.

  11. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, real-time image processing, low memory consumption and breadth of application.

  12. Exact solution for the Poisson field in a semi-infinite strip.

    PubMed

    Cohen, Yossi; Rothman, Daniel H

    2017-04-01

    The Poisson equation is associated with many physical processes. Yet exact analytic solutions for the two-dimensional Poisson field are scarce. Here we derive an analytic solution for the Poisson equation with constant forcing in a semi-infinite strip. We provide a method that can be used to solve the field in other intricate geometries. We show that the Poisson flux reveals an inverse square-root singularity at a tip of a slit, and identify a characteristic length scale in which a small perturbation, in a form of a new slit, is screened by the field. We suggest that this length scale expresses itself as a characteristic spacing between tips in real Poisson networks that grow in response to fluxes at tips.
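
Although the paper's point is an exact analytic solution, Poisson fields like this are easy to cross-check numerically. A minimal Jacobi-iteration sketch for -∆u = 1 with zero boundary values on a finite rectangle (a stand-in for the semi-infinite strip, not the paper's method):

```python
def solve_poisson(nx, ny, f, iters=2000):
    """Jacobi iteration for -laplacian(u) = f on an nx-by-ny grid with
    u = 0 on the boundary and unit grid spacing."""
    u = [[0.0] * ny for _ in range(nx)]
    for _ in range(iters):
        new = [row[:] for row in u]  # boundary rows/columns stay zero
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                new[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                                    + u[i][j + 1] + u[i][j - 1] + f)
        u = new
    return u

u = solve_poisson(21, 21, f=1.0)
```

The field is largest at the center and symmetric about the midlines, as expected for constant forcing on a square.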

  13. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients, the modeling of which is enabled by Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.

  14. Estimating the intensity of a cyclic Poisson process in the presence of additive and multiplicative linear trend

    NASA Astrophysics Data System (ADS)

    Wayan Mangku, I.

    2017-10-01

    In this paper we survey some results on estimation of the intensity function of a cyclic Poisson process in the presence of additive and multiplicative linear trend. We do not assume any parametric form for the cyclic component of the intensity function, except that it is periodic. Moreover, we consider the case when only a single realization of the Poisson process is observed in a bounded interval. The considered estimators are weakly and strongly consistent when the size of the observation interval indefinitely expands. Asymptotic approximations to the bias and variance of those estimators are presented.

  15. Gene regulation and noise reduction by coupling of stochastic processes

    NASA Astrophysics Data System (ADS)

    Ramos, Alexandre F.; Hornos, José Eduardo M.; Reinitz, John

    2015-02-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.
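
The mechanism described, gene switching faster than protein turnover producing sub-Poissonian noise, can be illustrated with a Gillespie simulation of a simplified self-repressing gene (hypothetical rates; production only in the on state, a simplification of the two-rate model of the paper):

```python
import random

def gillespie_gene(k_prod, f_on, h_off, t_end, rng):
    """Gillespie simulation of a hypothetical binary self-repressing gene:
    protein is made at rate k_prod only while the gene is on and degraded
    at unit rate per molecule; the gene switches off at rate h_off * n
    (repression by its own product) and back on at rate f_on."""
    t, n, on = 0.0, 60, True
    stats_t, stats_n, stats_n2 = 0.0, 0.0, 0.0
    while t < t_end:
        rates = (k_prod if on else 0.0,     # protein production
                 float(n),                  # protein degradation
                 h_off * n if on else f_on) # gene state switch
        total = rates[0] + rates[1] + rates[2]
        dt = rng.expovariate(total)
        stats_t += dt                       # time-weighted moments
        stats_n += n * dt
        stats_n2 += n * n * dt
        t += dt
        r = rng.random() * total
        if r < rates[0]:
            n += 1
        elif r < rates[0] + rates[1]:
            n -= 1
        else:
            on = not on
    mean = stats_n / stats_t
    fano = (stats_n2 / stats_t - mean ** 2) / mean
    return mean, fano

rng = random.Random(5)
mean, fano = gillespie_gene(k_prod=100.0, f_on=400.0, h_off=4.0,
                            t_end=600.0, rng=rng)
```

With switching this fast relative to protein synthesis and degradation, the time-averaged Fano factor should fall below 1, i.e. into the sub-Poissonian regime the abstract describes.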

  16. Gene regulation and noise reduction by coupling of stochastic processes

    PubMed Central

    Hornos, José Eduardo M.; Reinitz, John

    2015-01-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: there exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction. PMID:25768447

  17. Gene regulation and noise reduction by coupling of stochastic processes.

    PubMed

    Ramos, Alexandre F; Hornos, José Eduardo M; Reinitz, John

    2015-02-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eukaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.
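
    The fast-switching condition for sub-Poissonian noise can be probed numerically. Below is a minimal Gillespie-type sketch (the proportional off-switching rate `k_bind * n` and all parameter values are illustrative assumptions, not the authors' exact model) that simulates a negatively self-regulated binary gene and estimates the time-averaged Fano factor of the protein number:

    ```python
    import random

    def gillespie_gene(k_on, k_bind, syn, deg, t_end, seed=1):
        """Gillespie simulation of a binary gene with negative self-regulation:
        the gene switches on at rate k_on, switches off at rate k_bind * n
        (protein binding represses its own gene), produces protein at rate syn
        while on, and each protein molecule degrades at rate deg."""
        rng = random.Random(seed)
        t, state, n = 0.0, 1, 0
        times, values = [0.0], [0]
        while t < t_end:
            rates = [
                k_on if state == 0 else k_bind * n,  # gene switching
                syn if state == 1 else 0.0,          # protein synthesis
                deg * n,                             # protein degradation
            ]
            total = sum(rates)
            if total == 0.0:
                break
            t += rng.expovariate(total)
            r = rng.random() * total
            if r < rates[0]:
                state = 1 - state
            elif r < rates[0] + rates[1]:
                n += 1
            else:
                n -= 1
            times.append(t)
            values.append(n)
        return times, values

    def time_averaged_fano(times, values, t_burn):
        """Time-weighted variance-to-mean ratio of protein number after a burn-in."""
        tot, m1, m2 = 0.0, 0.0, 0.0
        for i in range(len(values) - 1):
            if times[i] < t_burn:
                continue
            dt = times[i + 1] - times[i]
            tot += dt
            m1 += values[i] * dt
            m2 += values[i] ** 2 * dt
        mean = m1 / tot
        return (m2 / tot - mean ** 2) / mean
    ```

    With switching rates large relative to synthesis and degradation, the estimated Fano factor can fall below 1, which is the sub-Poissonian signature discussed in the abstract.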

  18. Poisson-event-based analysis of cell proliferation.

    PubMed

    Summers, Huw D; Wills, John W; Brown, M Rowan; Rees, Paul

    2015-05-01

    A protocol for the assessment of cell proliferation dynamics is presented. This is based on the measurement of cell division events and their subsequent analysis using Poisson probability statistics. Detailed analysis of proliferation dynamics in heterogeneous populations requires single cell resolution within a time series analysis and so is technically demanding to implement. Here, we show that by focusing on the events during which cells undergo division rather than directly on the cells themselves, a simplified image acquisition and analysis protocol can be followed, which maintains single cell resolution and reports on the key metrics of cell proliferation. The technique is demonstrated using a microscope with 1.3 μm spatial resolution to track mitotic events within A549 and BEAS-2B cell lines, over a period of up to 48 h. Automated image processing of the bright field images using standard algorithms within the ImageJ software toolkit yielded 87% accurate recording of the manually identified temporal and spatial positions of the mitotic event series. Analysis of the statistics of the interevent times (i.e., times between observed mitoses in a field of view) showed that cell division conformed to a nonhomogeneous Poisson process in which the rate of occurrence of mitotic events, λ, increased exponentially over time, and provided values of the mean intermitotic time of 21.1 ± 1.2 h for the A549 cells and 25.0 ± 1.1 h for the BEAS-2B cells. Comparison of the mitotic event series for the BEAS-2B cell line to that predicted by random Poisson statistics indicated that temporal synchronisation of the cell division process was occurring within 70% of the population and that this could be increased to 85% through serum starvation of the cell culture. © 2015 International Society for Advancement of Cytometry.
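
    A nonhomogeneous Poisson process with exponentially increasing rate λ(t) = λ0·exp(αt), as fitted above, can be simulated by Lewis-Shedler thinning; a minimal sketch (parameter names are ours):

    ```python
    import math
    import random

    def nhpp_thinning(lam0, alpha, t_end, seed=0):
        """Simulate event times of a nonhomogeneous Poisson process with rate
        lam(t) = lam0 * exp(alpha * t) on [0, t_end] by Lewis-Shedler thinning.
        Assumes alpha >= 0 so the rate is maximal at t_end."""
        rng = random.Random(seed)
        lam_max = lam0 * math.exp(alpha * t_end)
        t, events = 0.0, []
        while True:
            # candidate event from a homogeneous process at the dominating rate
            t += rng.expovariate(lam_max)
            if t > t_end:
                return events
            # accept with probability lam(t) / lam_max
            if rng.random() < lam0 * math.exp(alpha * t) / lam_max:
                events.append(t)
    ```

    Interevent times from such a simulation can then be compared with observed mitotic interevent times, as in the protocol described above.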

  19. Sojourning with the Homogeneous Poisson Process.

    PubMed

    Liu, Piaomu; Peña, Edsel A

    2016-01-01

    In this pedagogical article, distributional properties, some of them surprising, pertaining to the homogeneous Poisson process (HPP), when observed over a possibly random window, are presented. Properties of the gap-time that covers the termination time and of the correlations among gap-times of the observed events are obtained. Inference procedures, such as estimation and model validation, based on event occurrence data over the observation window, are also presented. We envision that through the results in this paper, a better appreciation of the subtleties involved in the modeling and analysis of recurrent events data will ensue, since the HPP is arguably one of the simplest among recurrent event models. In addition, the use of the theorem of total probability, Bayes theorem, the iterated rules of expectation, variance and covariance, and the renewal equation could be illustrative when teaching distribution theory, mathematical statistics, and stochastic processes at both the undergraduate and graduate levels. This article is targeted towards both instructors and students.
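
    One of the "surprising" distributional properties alluded to is the inspection paradox: the HPP gap-time that covers a fixed termination time is length-biased, with mean close to 2/λ rather than the naive 1/λ. A quick Monte Carlo check (a sketch, not the authors' derivation):

    ```python
    import random

    def covering_gap_mean(lam, t_obs, n_reps, seed=4):
        """Average length of the HPP gap that covers a fixed inspection time
        t_obs. By the inspection paradox this approaches 2/lam for large
        lam * t_obs, not the mean gap 1/lam."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_reps):
            t_prev, t = 0.0, 0.0
            while t <= t_obs:           # walk events until one passes t_obs
                t_prev = t
                t += rng.expovariate(lam)
            total += t - t_prev         # the gap straddling t_obs
        return total / n_reps
    ```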

  20. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts. Some of them are "true zeros", indicating that the drug-adverse event pairs cannot occur; these are distinguished from the remaining zero counts, which simply indicate that the drug-adverse event pairs have not occurred, or have not been reported, yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
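
    The expectation-maximization step for a two-parameter zero-inflated Poisson mixture (structural-zero probability π, Poisson mean λ) admits closed-form updates; a textbook sketch of this EM, standing in for the paper's estimation procedure:

    ```python
    import math
    import random

    def zip_sample(pi0, lam0, n, seed=0):
        """Generate ZIP data: structural zero with prob pi0, else Poisson(lam0)
        drawn with Knuth's method (fine for moderate lam0)."""
        rng = random.Random(seed)
        out = []
        for _ in range(n):
            if rng.random() < pi0:
                out.append(0)
            else:
                L, k, p = math.exp(-lam0), 0, 1.0
                while p > L:
                    k += 1
                    p *= rng.random()
                out.append(k - 1)
        return out

    def zip_em(counts, n_iter=200):
        """EM fit of the ZIP parameters (pi, lam) from observed counts."""
        lam = sum(counts) / len(counts) or 1.0
        pi = 0.5
        for _ in range(n_iter):
            p0 = math.exp(-lam)
            # E-step: posterior probability that each observed zero is structural
            z = [pi / (pi + (1 - pi) * p0) if y == 0 else 0.0 for y in counts]
            # M-step: update mixing weight and the Poisson mean of the rest
            pi = sum(z) / len(counts)
            w_sum = sum(1 - zi for zi in z)
            lam = sum((1 - zi) * y for zi, y in zip(z, counts)) / w_sum
        return pi, lam
    ```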

  1. Cumulative sum control charts for monitoring geometrically inflated Poisson processes: An application to infectious disease counts data.

    PubMed

    Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E

    2018-02-01

    In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example.
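
    A generic upper-sided CUSUM for counts uses the familiar recursion S_t = max(0, S_{t-1} + X_t - k), signalling when S_t exceeds a decision limit h; a minimal sketch (the reference value k and limit h would come from the statistical design the authors discuss, not from this code):

    ```python
    def cusum_upper(counts, k, h):
        """Upper-sided CUSUM for a sequence of counts. Returns the index of
        the first out-of-control signal (S_t > h), or None if none occurs."""
        s = 0.0
        for i, x in enumerate(counts):
            s = max(0.0, s + x - k)
            if s > h:
                return i
        return None
    ```

    For an in-control stream near the reference value the statistic hugs zero; a sustained upward shift accumulates until the limit is crossed.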

  2. The impact of short term synaptic depression and stochastic vesicle dynamics on neuronal variability

    PubMed Central

    Reich, Steven

    2014-01-01

    Neuronal variability plays a central role in neural coding and impacts the dynamics of neuronal networks. Unreliability of synaptic transmission is a major source of neural variability: synaptic neurotransmitter vesicles are released probabilistically in response to presynaptic action potentials and are recovered stochastically in time. The dynamics of this process of vesicle release and recovery interacts with variability in the arrival times of presynaptic spikes to shape the variability of the postsynaptic response. We use continuous time Markov chain methods to analyze a model of short term synaptic depression with stochastic vesicle dynamics coupled with three different models of presynaptic spiking: one in which the timing of presynaptic action potentials is modeled as a Poisson process, one in which action potentials occur more regularly than a Poisson process (sub-Poisson) and one in which action potentials occur more irregularly (super-Poisson). We use this analysis to investigate how variability in a presynaptic spike train is transformed by short term depression and stochastic vesicle dynamics to determine the variability of the postsynaptic response. We find that sub-Poisson presynaptic spiking increases the average rate at which vesicles are released, that the number of vesicles released over a time window is more variable for smaller time windows than larger time windows and that fast presynaptic spiking gives rise to Poisson-like variability of the postsynaptic response even when presynaptic spike times are non-Poisson. Our results complement and extend previously reported theoretical results and provide possible explanations for some trends observed in recorded data. PMID:23354693

  3. Nonlinear Poisson equation for heterogeneous media.

    PubMed

    Hu, Langhua; Wei, Guo-Wei

    2012-08-22

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
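
    As a toy analogue of a nonlinear Poisson equation (not the paper's 3D variational formulation), a 1D Poisson-Boltzmann-type equation u'' = sinh(u) with Dirichlet boundary values can be solved by Gauss-Seidel sweeps with the nonlinear term lagged:

    ```python
    import math

    def solve_nonlinear_poisson_1d(n, h, bc_left, bc_right, n_iter=5000):
        """Gauss-Seidel sketch for u'' = sinh(u) on a uniform grid of n points
        with spacing h and Dirichlet boundary values. The discrete equation
        (u[i-1] - 2*u[i] + u[i+1]) / h**2 = sinh(u[i]) is iterated with the
        sinh term evaluated at the previous value (lagged)."""
        u = [0.0] * n
        u[0], u[-1] = bc_left, bc_right
        for _ in range(n_iter):
            for i in range(1, n - 1):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * math.sinh(u[i]))
        return u
    ```

    With boundary values 1 and 0 the converged solution is convex and decreases monotonically between the boundaries, as the maximum principle predicts.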

  4. A Behavioral Theory of Timing.

    ERIC Educational Resources Information Center

    Killeen, Peter R.; Fetterman, J. Gregor

    1988-01-01

    A theory of timing is proposed, based on the observation that signals of reinforcement elicit adjunctive behaviors. Transitions between these behaviors are described as a Poisson process. These behaviors may come to serve as the basis for conditional discriminations of the passage of time. (SLD)

  5. A poisson process model for hip fracture risk.

    PubMed

    Schechner, Zvi; Luo, Gangming; Kaufman, Jonathan J; Siffert, Robert S

    2010-08-01

    The primary method for assessing fracture risk in osteoporosis relies on measurement of bone mass. Estimation of fracture risk is most often evaluated using logistic or proportional hazards models. Notwithstanding the success of these models, there is still much uncertainty as to who will or will not suffer a fracture. This has led to a search for other components besides mass that affect bone strength. The purpose of this paper is to introduce a new mechanistic stochastic model that characterizes the risk of hip fracture in an individual. A Poisson process is used to model the occurrence of falls, which are assumed to occur at a rate, lambda. The load induced by a fall is assumed to be a random variable that has a Weibull probability distribution. The combination of falls together with loads leads to a compound Poisson process. By retaining only those occurrences of the compound Poisson process that result in a hip fracture, a thinned Poisson process is defined that is itself a Poisson process. The fall rate is modeled as an affine function of age, and hip strength is modeled as a power law function of bone mineral density (BMD). The risk of hip fracture can then be computed as a function of age and BMD. By extending the analysis to a Bayesian framework, the conditional densities of BMD given a prior fracture and no prior fracture can be computed and shown to be consistent with clinical observations. In addition, the conditional probabilities of fracture given a prior fracture and no prior fracture can also be computed, and these also demonstrate results similar to clinical data. The model elucidates the fact that the hip fracture process is inherently random, and that improvements in hip strength estimation over and above that provided by BMD operate in a highly "noisy" environment and may therefore have little ability to impact clinical practice.
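
    The fall/load/thinning construction can be sketched directly: falls arrive as a Poisson process, loads are Weibull, and retaining only falls whose load exceeds a strength threshold yields the thinned fracture process. A minimal sketch with a fixed strength (the paper's age- and BMD-dependent forms are omitted):

    ```python
    import random

    def simulate_fractures(lam, weib_shape, weib_scale, strength, t_end, seed=7):
        """Falls occur as a Poisson process of rate lam on [0, t_end]; each
        fall induces a Weibull(shape, scale) load, and only falls whose load
        exceeds the hip strength are kept. The kept times form a thinned
        Poisson process of rate lam * P(load > strength)."""
        rng = random.Random(seed)
        t, fractures = 0.0, []
        while True:
            t += rng.expovariate(lam)
            if t > t_end:
                return fractures
            load = rng.weibullvariate(weib_scale, weib_shape)
            if load > strength:
                fractures.append(t)
    ```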

  6. Birth and Death Process Modeling Leads to the Poisson Distribution: A Journey Worth Taking

    ERIC Educational Resources Information Center

    Rash, Agnes M.; Winkel, Brian J.

    2009-01-01

    This paper describes details of development of the general birth and death process from which we can extract the Poisson process as a special case. This general process is appropriate for a number of courses and units in courses and can enrich the study of mathematics for students as it touches and uses a diverse set of mathematical topics, e.g.,…

  7. Physical properties of biophotons and their biological functions.

    PubMed

    Chang, Jiin-Ju

    2008-05-01

    Biophotons (BPHs) are weak photons within or emitted from living organisms. The intensities of BPHs range from a few to several hundred photons·s⁻¹·cm⁻². BPH emission originates from a de-localized coherent electromagnetic field within the living organisms and is regulated by the field. In this paper, based on experimental results showing Poisson and sub-Poisson distributions in photocount statistics, the coherent properties of BPHs and their functions in cell communication are described. The possible roles of BPHs in the functioning of DNA and proteins are also discussed, including DNA replication, protein synthesis, cell signalling, oxidative phosphorylation and photosynthesis.

  8. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering the stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusion processes (Wiener process and Poisson process). Then the corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
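
    A jump-diffusion price path of the kind referenced (Wiener part plus compound Poisson jump part) can be simulated with a simple Euler scheme; this Merton-style sketch with Gaussian log-jumps is an assumption for illustration, not the paper's exact dynamics:

    ```python
    import math
    import random

    def jump_diffusion_path(s0, mu, sigma, lam, jump_mu, jump_sigma, t_end,
                            n_steps, seed=2):
        """Simulate a geometric jump-diffusion: log-returns combine a drift,
        a Brownian increment, and a compound Poisson jump term with Gaussian
        log-jump sizes. With dt small, at most one jump per step is a good
        approximation."""
        rng = random.Random(seed)
        dt = t_end / n_steps
        s, path = s0, [s0]
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))          # Wiener increment
            n_jumps = 1 if rng.random() < lam * dt else 0  # Poisson jump flag
            jump = sum(rng.gauss(jump_mu, jump_sigma) for _ in range(n_jumps))
            s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dw + jump)
            path.append(s)
        return path
    ```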

  9. The Evolution of Hyperedge Cardinalities and Bose-Einstein Condensation in Hypernetworks.

    PubMed

    Guo, Jin-Li; Suo, Qi; Shen, Ai-Zhong; Forrest, Jeffrey

    2016-09-27

    To depict the complex relationship among nodes and the evolving process of a complex system, a Bose-Einstein hypernetwork is proposed in this paper. Based on two basic evolutionary mechanisms, growth and preference jumping, the distribution of hyperedge cardinalities is studied. Poisson process theory is used to describe the arrival process of new node batches. By combining Poisson process theory with a continuity technique, the hypernetwork is analyzed and the characteristic equation of hyperedge cardinalities is obtained. Additionally, an analytical expression for the stationary average hyperedge cardinality distribution is derived by employing the characteristic equation, from which Bose-Einstein condensation in the hypernetwork is obtained. The theoretical analyses in this paper agree with the conducted numerical simulations. This is the first study on the hyperedge cardinality in hypernetworks, where Bose-Einstein condensation can be regarded as a special case of hypernetworks. Moreover, a condensation degree is also discussed with which Bose-Einstein condensation can be classified.

  10. Slow diffusion by Markov random flights

    NASA Astrophysics Data System (ADS)

    Kolesnik, Alexander D.

    2018-06-01

    We present a conception of slow diffusion processes in the Euclidean spaces Rm, m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that leads, on long time intervals, to stationary distributions is given. The stationary distributions of slow diffusion processes in some low-dimensional Euclidean spaces are presented.

  11. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed-and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
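
    The low-exposure argument can be reproduced in a few lines: a plain single-state Poisson model with a small mean already yields a preponderance of zeros, with no dual-state process required. A sketch (the function name and parameters are ours):

    ```python
    import math
    import random

    def zero_fraction_poisson(mean, n, seed=3):
        """Fraction of zero counts in a sample of n plain Poisson(mean) draws,
        generated with Knuth's method (adequate for small means)."""
        rng = random.Random(seed)
        def pois(mu):
            L, k, p = math.exp(-mu), 0, 1.0
            while p > L:
                k += 1
                p *= rng.random()
            return k - 1
        sample = [pois(mean) for _ in range(n)]
        return sum(1 for x in sample if x == 0) / n
    ```

    With a mean of 0.05 (low exposure), roughly 95% of counts are zero even though the generating process has a single state; with a mean of 5 the zeros all but vanish.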

  12. Sample size calculations for comparative clinical trials with over-dispersed Poisson process data.

    PubMed

    Matsui, Shigeyuki

    2005-05-15

    This paper develops a new formula for sample size calculations for comparative clinical trials with Poisson or over-dispersed Poisson process data. The criterion for sample size calculation is developed on the basis of asymptotic approximations for a two-sample non-parametric test to compare the empirical event rate function between treatment groups. This formula can accommodate time heterogeneity, inter-patient heterogeneity in event rate, and also time-varying treatment effects. An application of the formula to a trial for chronic granulomatous disease is provided. Copyright 2004 John Wiley & Sons, Ltd.

  13. A new approach for handling longitudinal count data with zero-inflation and overdispersion: poisson geometric process model.

    PubMed

    Wan, Wai-Yin; Chan, Jennifer S K

    2009-08-01

    For time series of count data, correlated measurements, clustering as well as excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are derived from the geometric process model, which was originally proposed for modelling positive continuous data and has been extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using the Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using the deviance information criterion (DIC).

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  15. Variance to mean ratio, R(t), for poisson processes on phylogenetic trees.

    PubMed

    Goldman, N

    1994-09-01

    The ratio of expected variance to mean, R(t), of numbers of DNA base substitutions for contemporary sequences related by a "star" phylogeny is widely seen as a measure of the adherence of the sequences' evolution to a Poisson process with a molecular clock, as predicted by the "neutral theory" of molecular evolution under certain conditions. A number of estimators of R(t) have been proposed, all predicted to have mean 1 and distributions based on the chi-squared. Various genes have previously been analyzed and found to have values of R(t) far in excess of 1, calling into question important aspects of the neutral theory. In this paper, I use Monte Carlo simulation to show that the previously suggested means and distributions of estimators of R(t) are highly inaccurate. The analysis is applied to star phylogenies and to general phylogenetic trees, and well-known gene sequences are reanalyzed. For star phylogenies the results show that Kimura's estimators ("The Neutral Theory of Molecular Evolution," Cambridge Univ. Press, Cambridge, 1983) are unsatisfactory for statistical testing of R(t), but confirm the accuracy of Bulmer's correction factor (Genetics 123: 615-619, 1989). For all three nonstar phylogenies studied, attained values of all three estimators of R(t), although larger than 1, are within their true confidence limits under simple Poisson process models. This shows that lineage effects can be responsible for high estimates of R(t), restoring some limited confidence in the molecular clock and showing that the distinction between lineage and molecular clock effects is vital.(ABSTRACT TRUNCATED AT 250 WORDS)
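
    A Monte Carlo estimator of R(t) for a star phylogeny can be sketched as follows. This is a simplified illustration (known substitution counts per lineage, no estimation of divergence times), not Goldman's full simulation:

    ```python
    import math
    import random

    def dispersion_index_star(rate, t, n_lineages, n_reps, seed=11):
        """Average sample variance-to-mean ratio of substitution counts on
        n_lineages independent lineages of a star phylogeny evolving under a
        Poisson clock: each lineage accrues Poisson(rate * t) substitutions."""
        rng = random.Random(seed)
        mu = rate * t
        def pois():
            # Knuth's method; fine for moderate mu
            L, k, p = math.exp(-mu), 0, 1.0
            while p > L:
                k += 1
                p *= rng.random()
            return k - 1
        ratios = []
        for _ in range(n_reps):
            xs = [pois() for _ in range(n_lineages)]
            m = sum(xs) / n_lineages
            v = sum((x - m) ** 2 for x in xs) / (n_lineages - 1)
            if m > 0:
                ratios.append(v / m)
        return sum(ratios) / len(ratios)
    ```

    Under the simple Poisson clock the average ratio sits near 1; empirical estimates well above 1 would then point at clock violations or, as the paper argues, at lineage effects and estimator bias.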

  16. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity

    PubMed Central

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2014-01-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587

  17. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
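
    The contrast between Poisson and more regular (integrate-and-fire-like) input spike trains can be quantified by the Fano factor of spike counts in a window for a renewal process; a small sketch (the gamma inter-spike-interval choice is our stand-in for NLIF regularity, not the paper's model):

    ```python
    import random

    def count_fano(isi_sampler, t_window, n_trials, seed=9):
        """Fano factor (variance/mean) of spike counts in a window, for a
        renewal spike train whose inter-spike intervals are drawn by
        isi_sampler(rng)."""
        rng = random.Random(seed)
        counts = []
        for _ in range(n_trials):
            t, n = 0.0, 0
            while True:
                t += isi_sampler(rng)
                if t > t_window:
                    break
                n += 1
            counts.append(n)
        m = sum(counts) / n_trials
        v = sum((c - m) ** 2 for c in counts) / (n_trials - 1)
        return v / m
    ```

    Exponential intervals (a Poisson train) give a Fano factor near 1, while gamma intervals of shape 4 at the same rate (a more regular, sub-Poisson train) give a Fano factor near 1/4.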

  18. Poisson geometry from a Dirac perspective

    NASA Astrophysics Data System (ADS)

    Meinrenken, Eckhard

    2018-03-01

    We present proofs of classical results in Poisson geometry using techniques from Dirac geometry. This article is based on mini-courses at the Poisson summer school in Geneva, June 2016, and at the workshop Quantum Groups and Gravity at the University of Waterloo, April 2016.

  19. Poissonian renormalizations, exponentials, and power laws.

    PubMed

    Eliazar, Iddo

    2013-05-01

    This paper presents a comprehensive "renormalization study" of Poisson processes governed by exponential and power-law intensities. These Poisson processes are of fundamental importance, as they constitute the very bedrock of the universal extreme-value laws of Gumbel, Fréchet, and Weibull. Applying the method of Poissonian renormalization we analyze the emergence of these Poisson processes, unveil their intrinsic dynamical structures, determine their domains of attraction, and characterize their structural phase transitions. These structural phase transitions are shown to be governed by uniform and harmonic intensities, to have universal domains of attraction, to uniquely display intrinsic invariance, and to be intimately connected to "white noise" and to "1/f noise." Thus, we establish a Poissonian explanation to the omnipresence of white and 1/f noises.

  20. Concurrent topological design of composite structures and materials containing multiple phases of distinct Poisson's ratios

    NASA Astrophysics Data System (ADS)

    Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min

    2018-04-01

    Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme of the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint of the total mass.

  1. The non-equilibrium allele frequency spectrum in a Poisson random field framework.

    PubMed

    Kaj, Ingemar; Mugal, Carina F

    2016-10-01

    In population genetic studies, the allele frequency spectrum (AFS) efficiently summarizes genome-wide polymorphism data and shapes a variety of allele frequency-based summary statistics. While existing theory typically features equilibrium conditions, emerging methodology requires an analytical understanding of the build-up of the allele frequencies over time. In this work, we use the framework of Poisson random fields to derive new representations of the non-equilibrium AFS for the case of a Wright-Fisher population model with selection. In our approach, the AFS is a scaling-limit of the expectation of a Poisson stochastic integral and the representation of the non-equilibrium AFS arises in terms of a fixation time probability distribution. The known duality between the Wright-Fisher diffusion process and a birth and death process generalizing Kingman's coalescent yields an additional representation. The results carry over to the setting of a random sample drawn from the population and provide the non-equilibrium behavior of sample statistics. Our findings are consistent with and extend a previous approach where the non-equilibrium AFS solves a partial differential forward equation with a non-traditional boundary condition. Moreover, we provide a bridge to previous coalescent-based work, and hence tie several frameworks together. Since frequency-based summary statistics are widely used in population genetics, for example, to identify candidate loci of adaptive evolution, to infer the demographic history of a population, or to improve our understanding of the underlying mechanics of speciation events, the presented results are potentially useful for a broad range of topics. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Poisson mixture model for measurements using counting.

    PubMed

    Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz

    2010-03-01

    Starting with the basic Poisson statistical model of a counting measurement process, 'extra-Poisson' variance or 'overdispersion' is included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient, plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included; this uncorrelated background is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo simulation. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
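
    The extra-Poisson variance described above can be illustrated numerically. The sketch below (a minimal stdlib-Python Monte Carlo; the mean count of 20 and lognormal width of 0.4 are arbitrary illustrative choices, not values from the paper) draws counts whose Poisson mean is scaled by a lognormal coefficient, and shows the variance-to-mean ratio rising well above the pure-Poisson value of 1:

```python
import math
import random
import statistics

def sample_poisson(rng, lam):
    """Poisson draw via Knuth's multiplication method (fine for moderate lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def overdispersed_count(rng, mean_counts, sigma):
    """Gross count whose Poisson mean is scaled by a lognormal coefficient."""
    coeff = math.exp(rng.gauss(0.0, sigma))  # lognormal normalising coefficient
    return sample_poisson(rng, mean_counts * coeff)

rng = random.Random(42)
counts = [overdispersed_count(rng, 20.0, 0.4) for _ in range(20000)]
m, v = statistics.mean(counts), statistics.variance(counts)
print(f"mean={m:.1f}  variance={v:.1f}  variance/mean={v / m:.2f}")
```

    A pure Poisson sample would give a variance/mean ratio near 1; the lognormal mixing inflates it several-fold, which is the 'overdispersion' the paper builds into its likelihood.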

  3. [Application of negative binomial regression and modified Poisson regression in the research of risk factors for injury frequency].

    PubMed

    Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan

    2011-11-01

    To explore the application of negative binomial regression and modified Poisson regression in analysing the factors that influence injury frequency and the risk factors leading to an increase in injury frequency, 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on injury events were used to fit modified Poisson regression and negative binomial regression models, and the risk factors associated with increased unintentional injury frequency among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model showed over-dispersion (P < 0.0001) according to the Lagrange multiplier test; the over-dispersed data were therefore better fitted by the modified Poisson regression and negative binomial regression models. Both showed that male gender, younger age, a father working away from the hometown, a guardian with education above junior high school level, and smoking might be associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better, and this model could give a more accurate interpretation of the relevant factors affecting the frequency of injury.
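
    The over-dispersion that motivates the negative binomial model can be checked with a simple variance-to-mean diagnostic. The sketch below (stdlib Python; the mean of 2 injuries and shape 0.8 are invented illustrative values, and the crude dispersion ratio stands in for the Lagrange multiplier test used in the paper) simulates negative binomial counts via their gamma-mixed-Poisson representation:

```python
import math
import random
import statistics

def sample_poisson(rng, lam):
    """Poisson draw via Knuth's multiplication method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def negative_binomial_count(rng, mean, shape):
    """Negative binomial as a Poisson whose rate is gamma-distributed."""
    return sample_poisson(rng, rng.gammavariate(shape, mean / shape))

rng = random.Random(7)
injuries = [negative_binomial_count(rng, 2.0, 0.8) for _ in range(5000)]
m, v = statistics.mean(injuries), statistics.variance(injuries)
print(f"mean={m:.2f}  variance={v:.2f}  dispersion={v / m:.2f}")
```

    A dispersion ratio well above 1 is the signature that a plain Poisson regression is inadequate and a modified Poisson or negative binomial model should be fitted instead.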

  4. Applying Flammability Limit Probabilities and the Normoxic Upward Limiting Pressure Concept to NASA STD-6001 Test 1

    NASA Technical Reports Server (NTRS)

    Olson, Sandra L.; Beeson, Harold; Fernandez-Pello, A. Carlos

    2014-01-01

    Repeated Test 1 extinction tests near the upward flammability limit are expected to follow a Poisson process trend. This Poisson process trend suggests that rather than define a ULOI and MOC (which requires two limits to be determined), it might be better to define a single upward limit as being where a fraction 1/e (where e, approximately equal to 2.7183, is the base of the natural logarithm) of the materials burn, or, rounding, where approximately 1/3 of the samples fail the test (and burn). Recognizing that spacecraft atmospheres will not span the entire oxygen-pressure parameter space, but actually lie along the normoxic atmosphere control band, we can focus materials flammability testing along this normoxic band. A Normoxic Upward Limiting Pressure (NULP) is defined that determines the minimum safe total pressure for a material within the constant partial pressure control band. Then, increasing this pressure limit by a factor of safety, we can define the material as being safe to use at the NULP + SF (where SF is on the order of 10 kilopascals, based on existing flammability data). It is recommended that the thickest material to be tested with the current Test 1 igniter should be 3 mm (1/8 inch) thick, to avoid the problem of differentiating between an ignition limit and a true flammability limit.
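
    The single-limit idea (the lowest pressure at which a fraction 1/e of samples burn) can be sketched as a Monte Carlo grid search. Everything below is hypothetical: the logistic burn-probability curve, its midpoint of 60 kPa and width of 5 kPa, and the 200-trials-per-pressure protocol are invented for illustration and are not from the NASA report:

```python
import math
import random

E_INV = 1.0 / math.e  # ~0.368, the proposed single-limit burn fraction

def burn_probability(pressure_kpa):
    """Hypothetical material response: burn probability rising with pressure."""
    return 1.0 / (1.0 + math.exp(-(pressure_kpa - 60.0) / 5.0))

def estimated_limit(rng, pressures, trials=200):
    """Lowest tested pressure whose observed burn fraction reaches 1/e."""
    for p in sorted(pressures):
        burns = sum(rng.random() < burn_probability(p) for _ in range(trials))
        if burns / trials >= E_INV:
            return p
    return None

rng = random.Random(1)
grid = [40.0 + 2.0 * i for i in range(21)]  # 40-80 kPa test grid
limit = estimated_limit(rng, grid)
print(f"estimated limiting pressure: {limit} kPa")
```

    For this assumed response curve, the true 1/e crossing sits near 57.3 kPa, so the grid search should land within a step or two of that pressure; a safety factor would then be added on top of the estimate, as the abstract describes.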

  5. Paretian Poisson Processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics—connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.
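
    The link to the Fréchet distribution mentioned above is easy to verify by simulation. The sketch below (stdlib Python; the intensity constant c = 1, exponent alpha = 1.5 and simulation floor of 0.1 are arbitrary choices) generates the points of a Poisson process whose mean number of points above x is Λ(x) = c·x^(-alpha), and checks that the largest point obeys P(max ≤ x) = exp(-c·x^(-alpha)):

```python
import math
import random

def paretian_points_above(rng, c, alpha, floor):
    """Points above `floor` for a Poisson process whose mean number of points
    above x is Lambda(x) = c * x**(-alpha) (a power-law 'Paretian' intensity)."""
    mean_above = c * floor ** (-alpha)
    # Poisson count via Knuth's method
    threshold, n, p = math.exp(-mean_above), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    # Given the count, the points are i.i.d. Pareto variates above the floor
    return [floor * rng.random() ** (-1.0 / alpha) for _ in range(n)]

rng = random.Random(3)
c, alpha, floor = 1.0, 1.5, 0.1
maxima = []
for _ in range(20000):
    pts = paretian_points_above(rng, c, alpha, floor)
    maxima.append(max(pts) if pts else floor)

# Extreme-value check: P(max <= x) should follow the Frechet law exp(-c * x**-alpha)
x = 2.0
empirical = sum(mx <= x for mx in maxima) / len(maxima)
frechet = math.exp(-c * x ** (-alpha))
print(f"empirical P(max <= 2) = {empirical:.3f}, Frechet prediction = {frechet:.3f}")
```

    The agreement illustrates why Paretian Poisson processes sit at the foundation of the Fréchet extreme-value law: the maximal point of a power-law intensity is Fréchet-distributed by construction.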

  6. Time distributions of solar energetic particle events: Are SEPEs really random?

    NASA Astrophysics Data System (ADS)

    Jiggens, P. T. A.; Gabriel, S. B.

    2009-10-01

    Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models, with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present an analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes, and compare these with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. Inherent Poisson assumptions of stationarity and event independence are investigated, and it appears that they do not hold and can be dependent upon the event definition used. SEPEs appear to have some memory indicating that events are not completely random, with activity levels varying even during solar active periods, and are characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment should be modified to incorporate long-term event dependency and short-term system memory.

  7. The contribution of simple random sampling to observed variations in faecal egg counts.

    PubMed

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

    It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conformed to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasite diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well-mixed faecal sample. The Poisson processes that lead to this variability are described, together with illustrative examples of the potentially large confidence intervals that can arise from observed faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
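
    The width of the confidence intervals the abstract alludes to follows directly from Poisson statistics. The sketch below (stdlib Python; the example count of 10 eggs on a slide is an arbitrary illustration, and the slide-to-epg multiplication factor is not modeled) computes an exact Garwood-style interval for the Poisson mean by bisection on the Poisson CDF:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated term by term."""
    term, total = math.exp(-lam), math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def exact_poisson_ci(count, alpha=0.05):
    """Garwood-style exact interval for a Poisson mean, found by bisection."""
    def solve(too_small, lo, hi):
        for _ in range(100):
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if too_small(mid) else (lo, mid)
        return (lo + hi) / 2.0
    # lower limit: P(X >= count) = alpha/2;  upper limit: P(X <= count) = alpha/2
    lower = 0.0 if count == 0 else solve(
        lambda lam: 1.0 - poisson_cdf(count - 1, lam) < alpha / 2, 0.0, float(count))
    upper = solve(
        lambda lam: poisson_cdf(count, lam) > alpha / 2, float(count), 10.0 * count + 20.0)
    return lower, upper

lo, hi = exact_poisson_ci(10)  # e.g. 10 eggs observed on a McMaster slide
print(f"95% CI for the mean count: ({lo:.2f}, {hi:.2f})")
```

    A count of 10 carries a 95% interval of roughly 4.8 to 18.4 before any multiplication to eggs per gram, which is exactly the "potentially large confidence interval" that no amount of tinkering with the McMaster technique can remove.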

  8. Relation Between Firing Statistics of Spiking Neuron with Delayed Fast Inhibitory Feedback and Without Feedback

    NASA Astrophysics Data System (ADS)

    Vidybida, Alexander; Shchur, Olha

    We consider a class of spiking neuronal models, defined by a set of conditions typical for basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, and also for some artificial neurons. A neuron is fed with a Poisson process. Each output impulse is applied to the neuron itself after a finite delay Δ. This impulse acts as if delivered through a fast Cl-type inhibitory synapse. We derive a general relation which allows calculating exactly the probability density function (pdf) p(t) of output interspike intervals of a neuron with feedback, based on the known pdf p0(t) for the same neuron without feedback and on the properties of the feedback line (the Δ value). Similar relations between the corresponding moments are derived. Furthermore, we prove that the initial segment of the pdf p0(t) for a neuron with a fixed threshold level is the same for any neuron satisfying the imposed conditions and is completely determined by the input stream. For the Poisson input stream, we calculate that initial segment exactly and, based on it, obtain exactly the initial segment of the pdf p(t) for a neuron with feedback. That is, the initial segment of p(t) is model-independent as well. The obtained expressions are checked by means of Monte Carlo simulation. The course of p(t) has a pronounced peculiarity, which makes it impossible to approximate p(t) by a Poisson or another simple stochastic process.

  9. Gaussian process-based Bayesian nonparametric inference of population size trajectories from gene genealogies.

    PubMed

    Palacios, Julia A; Minin, Vladimir N

    2013-03-01

    Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more believable aspects of the viral demographic histories than the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.

  10. Poissonian renormalizations, exponentials, and power laws

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2013-05-01

    This paper presents a comprehensive “renormalization study” of Poisson processes governed by exponential and power-law intensities. These Poisson processes are of fundamental importance, as they constitute the very bedrock of the universal extreme-value laws of Gumbel, Fréchet, and Weibull. Applying the method of Poissonian renormalization we analyze the emergence of these Poisson processes, unveil their intrinsic dynamical structures, determine their domains of attraction, and characterize their structural phase transitions. These structural phase transitions are shown to be governed by uniform and harmonic intensities, to have universal domains of attraction, to uniquely display intrinsic invariance, and to be intimately connected to “white noise” and to “1/f noise.” Thus, we establish a Poissonian explanation to the omnipresence of white and 1/f noises.

  11. A Bayesian analysis for identifying DNA copy number variations using a compound Poisson process.

    PubMed

    Chen, Jie; Yiğiter, Ayten; Wang, Yu-Ping; Deng, Hong-Wen

    2010-01-01

    To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for gaining CNVs information based on aCGH data. However, most of these methods make use of the log-intensity ratios in aCGH data without taking advantage of other information such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by the specific features of aCGH data, we developed a novel method that takes into account the estimation of a change point or locus of the CNV in aCGH data with its associated biomarker position on the chromosome using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect loci of multiple CNVs in the data, a sliding window process combined with our derived Bayesian posterior probability was proposed. To evaluate the performance of the method in the estimation of the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.

  12. Poisson regression models outperform the geometrical model in estimating the peak-to-trough ratio of seasonal variation: a simulation study.

    PubMed

    Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C

    2011-12-01

    Several statistical methods for assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior to Edwards' estimator of the peak-to-trough ratio of seasonal variation with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and allows adjustment for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and in the presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. The Poisson regression models also had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions. This simulation study encourages the use of Poisson regression models, as opposed to the geometrical model, in estimating the peak-to-trough ratio of seasonal variation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
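
    The peak-to-trough ratio of a log-linear seasonal model can be illustrated with a small simulation. The sketch below is not one of the paper's estimators: it simulates daily Poisson counts with a single annual harmonic on the log scale (baseline rate 10/day and harmonic coefficients 0.2 and 0.1 are invented values) and recovers the ratio exp(2·amplitude) by a crude least-squares fit to log day-of-year means:

```python
import math
import random

def sample_poisson(rng, lam):
    """Poisson draw via Knuth's multiplication method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

# log-linear seasonal rate: log lambda(d) = a + b*cos(wd) + c*sin(wd)
a, b, c = math.log(10.0), 0.2, 0.1
true_ratio = math.exp(2.0 * math.hypot(b, c))  # peak rate / trough rate

rng = random.Random(8)
years, two_pi = 30, 2.0 * math.pi
counts = [sample_poisson(rng, math.exp(a + b * math.cos(two_pi * d / 365)
                                         + c * math.sin(two_pi * d / 365)))
          for d in range(365 * years)]

# crude estimator: harmonic least squares on log of day-of-year mean counts
by_day = [[] for _ in range(365)]
for d, k in enumerate(counts):
    by_day[d % 365].append(k)
log_means = [math.log(sum(v) / len(v)) for v in by_day]
b_hat = 2.0 / 365 * sum(lm * math.cos(two_pi * d / 365) for d, lm in enumerate(log_means))
c_hat = 2.0 / 365 * sum(lm * math.sin(two_pi * d / 365) for d, lm in enumerate(log_means))
est_ratio = math.exp(2.0 * math.hypot(b_hat, c_hat))
print(f"true peak-to-trough ratio={true_ratio:.3f}  estimated={est_ratio:.3f}")
```

    A proper Poisson regression (as in the paper) would fit the same harmonic terms by maximum likelihood and support covariates and a secular trend; the closed-form fit above merely shows what the peak-to-trough ratio measures.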

  13. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
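
    The clustering idea behind a Poisson mixture can be sketched without covariates. The code below (stdlib Python; a plain two-component Poisson mixture fitted by EM, not the paper's concomitant-variable or zero-inflated regression models, and the rates 2 and 9 with weights 0.7/0.3 are synthetic) recovers the two latent risk groups from count data alone:

```python
import math
import random
import statistics

def sample_poisson(rng, lam):
    """Poisson draw via Knuth's multiplication method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson_pmf(k, lam):
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def em_two_poisson(data, iters=100):
    """EM for a two-component Poisson mixture (e.g. low- vs high-risk groups)."""
    grand = statistics.mean(data)
    lam, w = [grand * 0.5, grand * 1.5], [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for k in data:  # E-step: responsibility of each component for each count
            p = [w[j] * poisson_pmf(k, lam[j]) for j in range(2)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        for j in range(2):  # M-step: re-estimate weights and rates
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            lam[j] = sum(r[j] * k for r, k in zip(resp, data)) / nj
    return w, lam

rng = random.Random(11)
data = [sample_poisson(rng, 2.0 if rng.random() < 0.7 else 9.0)
        for _ in range(2000)]
w, lam = em_two_poisson(data)
print(f"weights={w}  rates={lam}")
```

    In the paper's setting, each component's constant rate is replaced by a regression on covariates, but the E-step/M-step alternation, and the componentwise interpretation of risk, are the same.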

  14. Constructions and classifications of projective Poisson varieties.

    PubMed

    Pym, Brent

    2018-01-01

    This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.

  15. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  16. Constructions and classifications of projective Poisson varieties

    NASA Astrophysics Data System (ADS)

    Pym, Brent

    2018-03-01

    This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.

  17. [Statistical (Poisson) motor unit number estimation. Methodological aspects and normal results in the extensor digitorum brevis muscle of healthy subjects].

    PubMed

    Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J

    Among the different techniques for motor unit number estimation (MUNE) is the statistical (Poisson) one, in which the activation of motor units is achieved by electrical stimulation and the estimation is performed by means of a statistical analysis based on the Poisson distribution. The study was undertaken in order to give an accessible account of the statistical MUNE technique, showing a comprehensible view of its methodology, and to obtain normal results in the extensor digitorum brevis muscle (EDB) from a healthy population. One hundred and fourteen normal volunteers with ages ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for all of them was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group (p < 0.0001), with MUNE showing a better correlation with age than CMAP amplitude (-0.5002 and -0.4142, respectively; p < 0.0001). The statistical MUNE method is an important means of assessing the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does.

  18. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia.

    PubMed

    Park, Taeyoung; Krafty, Robert T; Sánchez, Alvaro I

    2012-07-27

    A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales with the health of the public.
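
    The core idea of a shift in a Poisson baseline rate can be illustrated far more simply than the paper's Bayesian varying-dimensional MCMC. The sketch below (stdlib Python; a frequentist profile-likelihood scan for a single change point, with rates 3 and 7 and a true change at day 60 chosen arbitrarily) locates the day at which the rate jumps:

```python
import math
import random

def sample_poisson(rng, lam):
    """Poisson draw via Knuth's multiplication method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def best_changepoint(counts, margin=5):
    """Profile log-likelihood scan for a single shift in a Poisson rate.
    The log(k!) terms are identical for every split and therefore omitted."""
    best_ll, best_tau = -math.inf, None
    for tau in range(margin, len(counts) - margin):
        ll = 0.0
        for seg in (counts[:tau], counts[tau:]):
            lam = sum(seg) / len(seg)  # segment MLE of the rate
            if lam > 0.0:
                ll += sum(-lam + k * math.log(lam) for k in seg)
        if ll > best_ll:
            best_ll, best_tau = ll, tau
    return best_tau

rng = random.Random(5)
series = ([sample_poisson(rng, 3.0) for _ in range(60)]
          + [sample_poisson(rng, 7.0) for _ in range(60)])
tau_hat = best_changepoint(series)
print(f"estimated change point: day {tau_hat} (true: day 60)")
```

    The paper's model generalizes this in two directions: an unknown number of change points handled by varying-dimensional MCMC, and covariates with an offset so that the baseline rate and policy effects are estimated jointly.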

  19. A Stabilized Finite Element Method for Modified Poisson-Nernst-Planck Equations to Determine Ion Flow Through a Nanopore

    PubMed Central

    Chaudhry, Jehanzeb Hameed; Comer, Jeffrey; Aksimentiev, Aleksei; Olson, Luke N.

    2013-01-01

    The conventional Poisson-Nernst-Planck equations do not account for the finite size of ions explicitly. This leads to solutions featuring unrealistically high ionic concentrations in the regions subject to external potentials, in particular, near highly charged surfaces. A modified form of the Poisson-Nernst-Planck equations accounts for steric effects and results in solutions with finite ion concentrations. Here, we evaluate numerical methods for solving the modified Poisson-Nernst-Planck equations by modeling electric field-driven transport of ions through a nanopore. We describe a novel, robust finite element solver that combines the application of Newton's method to the nonlinear Galerkin form of the equations with stabilization terms that appropriately handle the drift-diffusion processes. To make direct comparison with particle-based simulations possible, our method is specifically designed to produce solutions under periodic boundary conditions and to conserve the number of ions in the solution domain. We test our finite element solver on a set of challenging numerical experiments that include calculations of the ion distribution in a volume confined between two charged plates, calculations of the ionic current though a nanopore subject to an external electric field, and modeling the effect of a DNA molecule on the ion concentration and nanopore current. PMID:24363784

  20. A modified Poisson-Boltzmann equation applied to protein adsorption.

    PubMed

    Gama, Marlon de Souza; Santos, Mirella Simões; Lima, Eduardo Rocha de Almeida; Tavares, Frederico Wanderley; Barreto, Amaro Gomes Barreto

    2018-01-05

    Ion-exchange chromatography has been widely used as a standard process in the purification and analysis of proteins, based on the electrostatic interaction between the protein and the stationary phase. Over the years, several approaches have been used to improve the thermodynamic description of colloidal particle-surface interaction systems; however, there are still many gaps, specifically in describing the behavior of protein adsorption. Here, we present an improved methodology for predicting the adsorption equilibrium constant by solving the modified Poisson-Boltzmann (PB) equation in bispherical coordinates. By including dispersion interactions between ions and protein, and between ions and surface, the modified PB equation used can describe the Hofmeister effects. We solve the modified Poisson-Boltzmann equation to calculate the protein-surface potential of mean force, treating the system as a spherical colloid-plate system, as a function of process variables. From the potential of mean force, the Henry constants of adsorption for different proteins and surfaces are calculated as a function of pH, salt concentration, salt type, and temperature. The obtained Henry constants are compared with experimental data for several isotherms, showing excellent agreement. We have also performed a sensitivity analysis to verify the behavior of different kinds of salts and the Hofmeister effects. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Impact Damage on a Thin Glass Plate with a Thin Polycarbonate Backing

    DTIC Science & Technology

    2013-07-13

    fixed and equals 0.25 in 3D (close to the soda-lime glass Poisson ratio of 0.22), and 1/3 in 2D, since the assumption is that material points interact...only through a pair-potential. The Poisson ratio limitation is removed in the state-based formulation of peridynamics (see Ref. [26]), however, here...we use the bond-based for simplicity. We note that, in dynamic fracture problems of the type considered in this work, the Poisson ratio value does not

  2. The origin of bursts and heavy tails in human dynamics.

    PubMed

    Barabási, Albert-László

    2005-05-12

    The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behaviour into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. In contrast, there is increasing evidence that the timing of many human activities, ranging from communication to entertainment and work patterns, follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. Here I show that the bursty nature of human behaviour is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed, whereas a few experience very long waiting times. In contrast, random or priority-blind execution is well approximated by uniform inter-event statistics. These findings have important implications, ranging from resource management to service allocation, in both communications and retail.
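
    The decision-based queuing mechanism is simple to simulate. The sketch below (stdlib Python; a minimal Barabási-type priority queue with a fixed list of two tasks, where the highest-priority task is executed with probability p = 0.9999 and each executed task is replaced by a new one with uniform random priority) reproduces the heavy tail in waiting times:

```python
import random
import statistics

def priority_queue_waits(rng, length=2, steps=200000, p=0.9999):
    """Priority-queue model: with probability p execute the highest-priority
    task, otherwise a random one; each executed task is replaced by a new one."""
    tasks = [(rng.random(), 0) for _ in range(length)]  # (priority, arrival step)
    waits = []
    for t in range(1, steps + 1):
        if rng.random() < p:
            idx = max(range(length), key=lambda i: tasks[i][0])
        else:
            idx = rng.randrange(length)
        waits.append(t - tasks[idx][1])      # waiting time of the executed task
        tasks[idx] = (rng.random(), t)       # replace it with a fresh task
    return waits

rng = random.Random(2)
waits = priority_queue_waits(rng)
print(f"median wait: {statistics.median(waits)}, longest wait: {max(waits)}")
```

    Most tasks are executed immediately (median wait of one step), while a low-priority task can linger for thousands of steps, which is the bursty, heavy-tailed timing the abstract contrasts with Poisson behaviour.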

  3. Explanation of the Reaction of Monoclonal Antibodies with Candida Albicans Cell Surface in Terms of Compound Poisson Process

    NASA Astrophysics Data System (ADS)

    Dudek, Mirosław R.; Mleczko, Józef

    Surprisingly, still very little is known about the mathematical modeling of peaks in the binding affinity distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of affinity distribution. This result takes place when obstacles appear in the paratope-epitope reaction such that the process of reaching the specific epitope ceases to be a Poisson point process. A typical example is a large area of the cell surface that is unreachable by antibodies, leading to heterogeneity in the repletion of the cell surface. In this case the affinity of cells to bind the antibodies should be described by a more complex process than the pure Poisson point process. We suggest using a doubly stochastic Poisson process, in which the points are replaced by a binomial point process, resulting in the Neyman distribution. The distribution can have a strongly multi-modal character, with the number of modes depending on the concentration of antibodies and epitopes. All this means that there is a possibility to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even in the case when some peaks result from more than one binding mechanism.
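
    The clustered structure of such a doubly stochastic count is easy to demonstrate with the classic Neyman type A construction: a Poisson number of clusters, each contributing a Poisson number of points. The sketch below (stdlib Python; cluster mean 3 and per-cluster mean 4 are arbitrary illustrative values, and the Poisson-per-cluster variant stands in for the binomial point process named in the abstract) shows the resulting overdispersion relative to a plain Poisson count:

```python
import math
import random
import statistics

def sample_poisson(rng, lam):
    """Poisson draw via Knuth's multiplication method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def neyman_type_a(rng, mu, nu):
    """Doubly stochastic count: a Poisson number of clusters (mean mu),
    each contributing a Poisson number of bound points (mean nu)."""
    return sum(sample_poisson(rng, nu) for _ in range(sample_poisson(rng, mu)))

rng = random.Random(9)
draws = [neyman_type_a(rng, 3.0, 4.0) for _ in range(20000)]
m, v = statistics.mean(draws), statistics.variance(draws)
print(f"mean={m:.2f}  variance/mean={v / m:.2f}")  # a plain Poisson gives ratio 1
```

    For this construction the theoretical mean is mu*nu and the variance-to-mean ratio is 1 + nu, so the count is far from Poissonian; the clustering is also what produces the multiple modes in the histogram for suitable parameter values.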

  4. Theory of earthquakes interevent times applied to financial markets

    NASA Astrophysics Data System (ADS)

    Jagielski, Maciej; Kutner, Ryszard; Sornette, Didier

    2017-10-01

    We analyze the probability density function (PDF) of waiting times between financial loss exceedances. The empirical PDFs are fitted with the self-excited Hawkes conditional Poisson process with a long power law memory kernel. The Hawkes process is the simplest extension of the Poisson process that takes into account how past events influence the occurrence of future events. By analyzing the empirical data for 15 different financial assets, we show that the formalism of the Hawkes process used for earthquakes can successfully model the PDF of interevent times between successive market losses.
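
    A self-excited Hawkes process of this kind can be simulated with Ogata's thinning algorithm. The paper fits a long power-law memory kernel; the sketch below uses an exponential kernel and made-up parameters purely to keep the intensity bound simple.

```python
import math, random

def simulate_hawkes(mu, alpha, beta, t_max, rng):
    """Ogata thinning for a Hawkes process with conditional intensity
    lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i)) over past events.
    Stationary when the branching ratio alpha/beta < 1."""
    events, t = [], 0.0
    while True:
        # Intensity just after t bounds lambda(s) until the next event
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            return events
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() * lam_bar <= lam_t:
            events.append(t)

rng = random.Random(7)
ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, t_max=500.0, rng=rng)
rate = len(ev) / 500.0   # should be near mu / (1 - alpha/beta) = 1.5
```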

  5. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.

  6. Seasonally adjusted birth frequencies follow the Poisson distribution.

    PubMed

    Barra, Mathias; Lindstrøm, Jonas C; Adams, Samantha S; Augestad, Liv A

    2015-12-15

    Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test (p < 0.01). The fundamental model with year and month as explanatory variables is significantly improved (p < 0.001) by adding day of the week as an explanatory variable. Altogether 7.5% more children are born on Tuesdays than on Sundays. The digit sum of the date is non-significant as an explanatory variable (p = 0.23), nor does it increase the explained variance. INTERPRETATION: Spontaneous births are well modelled by a time-dependent Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in summer towards June and July, Friday and Tuesday stand out as particularly busy days, and the activity level is at its lowest during weekends.
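
    The day-of-week effect reported here is straightforward to reproduce in a simulation: daily counts are Poisson with a rate that depends on the weekday. The baseline rate and the 7.5% Tuesday excess below are rough figures taken from the abstract; the weekday labelling is arbitrary.

```python
import math, random

def poisson_draw(mu, rng):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(3)
base = 8.6                    # ~50,017 births over 16 years of days
excess = {2: 1.075}           # day 2 ("Tuesday") 7.5% above baseline
counts = {d: [] for d in range(7)}
for day in range(5840):       # 16 years of days
    d = day % 7
    counts[d].append(poisson_draw(base * excess.get(d, 1.0), rng))

# Estimated Tuesday/Sunday rate ratio, taking day 0 as "Sunday"
ratio = (sum(counts[2]) / len(counts[2])) / (sum(counts[0]) / len(counts[0]))
m0 = sum(counts[0]) / len(counts[0])
v0 = sum((x - m0) ** 2 for x in counts[0]) / (len(counts[0]) - 1)
dispersion = v0 / m0          # near 1 for an underlying Poisson process
```

    The variance-to-mean ratio staying near 1 within a weekday group is exactly the sliding variance-over-average diagnostic the authors use to argue for a Poisson process.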

  7. Combined analysis of magnetic and gravity anomalies using normalized source strength (NSS)

    NASA Astrophysics Data System (ADS)

    Li, L.; Wu, Y.

    2017-12-01

    Gravity and magnetic fields are both potential fields, which leads to inherent non-uniqueness in their interpretation. Combined analysis of magnetic and gravity anomalies based on Poisson's relation is used to identify homologous gravity and magnetic anomalies and to reduce this ambiguity. The traditional combined analysis linearly regresses the reduction-to-pole (RTP) magnetic anomaly against the first-order vertical derivative of the gravity anomaly, and provides a quantitative or semi-quantitative interpretation through the resulting correlation coefficient, slope and intercept. In this calculation, owing to the effect of remanent magnetization, the RTP anomaly still contains the effect of oblique magnetization, so homologous gravity and magnetic anomalies can appear uncorrelated in the linear regression. The normalized source strength (NSS), which can be computed from the magnetic tensor matrix, is insensitive to remanence. Here we present a new combined analysis using the NSS. Based on Poisson's relation, the gravity tensor matrix can be transformed, under the homology condition, into the pseudomagnetic tensor matrix for magnetization along the geomagnetic field direction. The NSS of the pseudomagnetic tensor matrix and of the original magnetic tensor matrix are calculated, and a linear regression analysis is carried out. The resulting correlation coefficient, slope and intercept indicate the degree of homology, the Poisson's ratio and the distribution of remanent magnetization, respectively. We test the approach on a synthetic model with complex magnetization; the results show that it can still distinguish a common source under strong remanence and recover the Poisson's ratio. Finally, the approach is applied to field data from China, demonstrating that it is feasible.

  8. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.

  9. NON-HOMOGENEOUS POISSON PROCESS MODEL FOR GENETIC CROSSOVER INTERFERENCE.

    PubMed

    Leu, Szu-Yun; Sen, Pranab K

    2014-01-01

    The genetic crossover interference is usually modeled with a stationary renewal process to construct the genetic map. We propose two non-homogeneous and dependent Poisson process models applied to the known physical map. The crossover process is assumed to start from an origin and to occur sequentially along the chromosome. The increment rate depends on the position of the markers and the number of crossover events occurring between the origin and the markers. We show how to obtain parameter estimates for the process and use simulation studies and real Drosophila data to examine the performance of the proposed models.

  10. Irreversible thermodynamics of Poisson processes with reaction.

    PubMed

    Méndez, V; Fort, J

    1999-11-01

    A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamical quantities computed exactly and up to second-order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics.

  11. A Hands-on Activity for Teaching the Poisson Distribution Using the Stock Market

    ERIC Educational Resources Information Center

    Dunlap, Mickey; Studstill, Sharyn

    2014-01-01

    The number of increases a particular stock makes over a fixed period follows a Poisson distribution. This article discusses using this easily-found data as an opportunity to let students become involved in the data collection and analysis process.

  12. How to deal with the Poisson-gamma model to forecast patients' recruitment in clinical trials when there are pauses in recruitment dynamic?

    PubMed

    Minois, Nathan; Savy, Stéphanie; Lauwers-Cances, Valérie; Andrieu, Sandrine; Savy, Nicolas

    2017-03-01

    Recruiting patients is a crucial step of a clinical trial, and estimating the trial duration is a question of paramount interest. Most techniques are based on deterministic models and various ad hoc methods that neglect the variability of the recruitment process. To overcome this difficulty, the so-called Poisson-gamma model has been introduced: for each centre, recruitment is modelled by a Poisson process whose rate is assumed constant in time and gamma-distributed. The relevance of this model has been widely investigated. In practice, however, rates are rarely constant in time and there are breaks in recruitment (for instance weekends or holidays). Such information can be collected and included in a model with piecewise-constant rate functions, yielding an inhomogeneous Cox model, for which estimating the trial duration is much more difficult. Three strategies for computing the expected trial duration are proposed: considering all breaks, considering only large breaks, and ignoring breaks. The biases of these estimation procedures are assessed by means of simulation studies under three break-simulation scenarios. All strategies yield estimates with very small bias; moreover, the strategy with the best predictive performance and the smallest bias is the one that ignores breaks. This result is important because, in practice, collecting break data is hard to manage.
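
    A minimal Monte Carlo sketch of the Poisson-gamma model (ignoring breaks, i.e. the simplest of the three strategies): each centre's constant rate is gamma-distributed, the pooled recruitment is a Poisson process with the summed rate, and the trial duration is the time of the N-th recruit. All numbers below are hypothetical.

```python
import random

def trial_duration(n_target, n_centres, shape, scale, rng):
    """One realization: draw each centre's rate from Gamma(shape, scale),
    then accumulate exponential inter-recruitment times of the pooled
    Poisson process until n_target patients are recruited."""
    total_rate = sum(rng.gammavariate(shape, scale) for _ in range(n_centres))
    return sum(rng.expovariate(total_rate) for _ in range(n_target))

rng = random.Random(11)
sims = [trial_duration(400, 20, shape=2.0, scale=0.05, rng=rng)
        for _ in range(2000)]
mean_duration = sum(sims) / len(sims)   # in the same time unit as the rates
```

    With 20 centres of mean rate 0.1 per day, the pooled rate averages 2 per day, so 400 patients take roughly 200 days; the spread across simulations reflects the gamma variability that deterministic methods neglect.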

  13. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.

  14. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was a minuscule 10^(-44). Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
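
    The comparison made here (Poisson versus the geometric limiting form of the negative binomial) can be illustrated with synthetic counts: data drawn from a geometric law are overdispersed, and the geometric log-likelihood beats the Poisson one at the shared maximum-likelihood mean. The sample below is synthetic, not the hemoglobin data.

```python
import math, random

rng = random.Random(5)
p_true = 1.0 / 3.0                 # geometric on {0,1,2,...}, mean 2
data = []
for _ in range(5000):
    k = 0
    while rng.random() > p_true:   # count failures before the first success
        k += 1
    data.append(k)

m = sum(data) / len(data)          # ML mean for both models
var = sum((x - m) ** 2 for x in data) / len(data)   # overdispersed: var > m
ll_pois = sum(x * math.log(m) - m - math.lgamma(x + 1) for x in data)
q = 1.0 / (1.0 + m)                # ML geometric parameter given mean m
ll_geom = sum(math.log(q) + x * math.log(1.0 - q) for x in data)
```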

  15. An Extension of SIC Predictions to the Wiener Coactive Model

    PubMed Central

    Houpt, Joseph W.; Townsend, James T.

    2011-01-01

    The survivor interaction contrasts (SIC) is a powerful measure for distinguishing among candidate models of human information processing. One class of models to which SIC analysis can apply are the coactive, or channel summation, models of human information processing. In general, parametric forms of coactive models assume that responses are made based on the first passage time across a fixed threshold of a sum of stochastic processes. Previous work has shown that the SIC for a coactive model based on the sum of Poisson processes has a distinctive down-up-down form, with an early negative region that is smaller than the later positive region. In this note, we demonstrate that a coactive process based on the sum of two Wiener processes has the same SIC form. PMID:21822333

  17. Modeling environmental noise exceedances using non-homogeneous Poisson processes.

    PubMed

    Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R

    2014-10-01

    In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. Once the estimation and tuning are done, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
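
    A non-homogeneous Poisson process with a Weibull-type rate, as used here, can be simulated by thinning a homogeneous process. The parameters below are invented; the check is that the mean number of exceedances matches the integrated rate Lambda(T) = (T/theta)**beta.

```python
import math, random

def nhpp_weibull(beta, theta, t_max, rng):
    """Thinning: candidate events from a homogeneous process at the maximal
    rate are kept with probability lambda(t)/lambda_max, where
    lambda(t) = (beta/theta) * (t/theta)**(beta - 1), increasing for beta > 1,
    so lambda(t_max) bounds the rate on [0, t_max]."""
    lam_max = (beta / theta) * (t_max / theta) ** (beta - 1)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > t_max:
            return events
        lam_t = (beta / theta) * (t / theta) ** (beta - 1)
        if rng.random() * lam_max <= lam_t:
            events.append(t)

rng = random.Random(2)
counts = [len(nhpp_weibull(beta=1.5, theta=10.0, t_max=30.0, rng=rng))
          for _ in range(4000)]
mean_count = sum(counts) / len(counts)
expected = (30.0 / 10.0) ** 1.5     # integrated rate over [0, T]
```

    The empirical distribution of `counts` is exactly what the model uses: the probability that a noise threshold is exceeded k times in an interval is Poisson with mean Lambda(T).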

  18. A Generalized QMRA Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0
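
    For reference, the classical two-parameter beta-Poisson approximation that this generalized model extends is a one-liner. The parameter values below are illustrative, not fitted; the low-dose behaviour is linear with slope alpha/beta.

```python
def beta_poisson(d, alpha, beta):
    """Approximate beta-Poisson dose-response: probability of infection at
    mean dose d. P(0) = 0, P is increasing in d, and P(d) ~ (alpha/beta)*d
    for small doses."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

alpha, beta = 0.15, 7.6            # illustrative parameter values only
doses = [0.1, 1.0, 10.0, 100.0, 1000.0]
probs = [beta_poisson(d, alpha, beta) for d in doses]
```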

  19. Noncommutative gauge theory for Poisson manifolds

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Wess, Julius

    2000-09-01

    A noncommutative gauge theory is associated to every Abelian gauge theory on a Poisson manifold. The semi-classical and full quantum version of the map from the ordinary gauge theory to the noncommutative gauge theory (Seiberg-Witten map) is given explicitly to all orders for any Poisson manifold in the Abelian case. In the quantum case the construction is based on Kontsevich's formality theorem.

  20. High order solution of Poisson problems with piecewise constant coefficients and interface jumps

    NASA Astrophysics Data System (ADS)

    Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben

    2017-04-01

    We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
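
    As a much simpler baseline than the CFM/BIM scheme described (and not that scheme), the sketch below solves a 1-D Poisson problem with second-order central differences and a Thomas tridiagonal solve, verifying the expected 2nd-order convergence against a manufactured solution.

```python
import math

def solve_poisson_1d(f, n):
    """Solve u'' = f on (0,1) with u(0) = u(1) = 0 on n interior points,
    using second-order central differences: tridiagonal system with
    diagonal -2, off-diagonals 1, and RHS scaled by h**2 (Thomas algorithm)."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    b = [f(xi) * h * h for xi in x]
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = -0.5, b[0] / -2.0          # forward elimination
    for i in range(1, n):
        denom = -2.0 - c[i - 1]
        c[i] = 1.0 / denom
        d[i] = (b[i] - d[i - 1]) / denom
    u = [0.0] * n                            # back substitution
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return x, u

def max_error(n):
    # Manufactured solution u = sin(pi x), so f = -pi**2 sin(pi x)
    x, u = solve_poisson_1d(lambda s: -math.pi ** 2 * math.sin(math.pi * s), n)
    return max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))

e1, e2 = max_error(50), max_error(100)
order = math.log(e1 / e2) / math.log(101.0 / 51.0)   # h = 1/(n+1)
```

    The observed order is close to 2; the CFM/BIM combination in the paper is precisely what lifts such schemes to 3rd or 4th order in the presence of interfaces.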

  1. Systematic design of 3D auxetic lattice materials with programmable Poisson's ratio for finite strains

    NASA Astrophysics Data System (ADS)

    Wang, Fengwen

    2018-05-01

    This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval ν ∈ [-0.78, 0.00] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.

  2. Using the Gamma-Poisson Model to Predict Library Circulations.

    ERIC Educational Resources Information Center

    Burrell, Quentin L.

    1990-01-01

    Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
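
    The gamma mixture of Poisson processes mentioned here yields negative binomial circulation counts, so, for example, the predicted fraction of books that never circulate in a period is (1 + theta)**(-alpha). A quick simulation with made-up parameters confirms the closed form.

```python
import math, random

def poisson_draw(mu, rng):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(13)
alpha, theta = 0.8, 2.5            # hypothetical per-book demand parameters
# Each book's circulation rate is Gamma(alpha, theta); counts are Poisson
circ = [poisson_draw(rng.gammavariate(alpha, theta), rng)
        for _ in range(20000)]

pred_zero = (1.0 + theta) ** (-alpha)   # negative binomial P(X = 0)
obs_zero = circ.count(0) / len(circ)
```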

  3. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  4. Extensions of Rasch's Multiplicative Poisson Model.

    ERIC Educational Resources Information Center

    Jansen, Margo G. H.; van Duijn, Marijtje A. J.

    1992-01-01

    A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)

  5. Transport of Multivalent Electrolyte Mixtures in Micro- and Nanochannels

    DTIC Science & Technology

    2013-11-08

    equations for this process are the unsteady Navier-Stokes equations along with continuity and the Poisson-Nernst-Planck system for the electrostatic part...about five times the Debye screening length D (the 1/e lengthscale for the potential from the solution of the linearized Poisson-Boltzmann equation

  6. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    NASA Astrophysics Data System (ADS)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.

  7. A two-phase Poisson process model and its application to analysis of cancer mortality among A-bomb survivors.

    PubMed

    Ohtaki, Megu; Tonda, Tetsuji; Aihara, Kazuyuki

    2015-10-01

    We consider a two-phase Poisson process model where only early successive transitions are assumed to be sensitive to exposure. In the case where the transition intensities are low, we derive analytically an approximate formula for the distribution of time to event for the excess hazard ratio (EHR) due to a single point exposure. The formula for EHR is a polynomial in exposure dose. Since the formula for EHR contains no unknown parameters except for the total number of stages, the number of exposure-sensitive stages, and a coefficient of exposure effect, it is easily applicable under a variety of situations in which there is a possible latency time from a single point exposure to occurrence of the event. Based on the multistage hypothesis of cancer, we formulate a radiation carcinogenesis model in which only some early consecutive stages of the process are sensitive to exposure, whereas later stages are not affected. An illustrative analysis using the proposed model is given for cancer mortality among A-bomb survivors. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype followed a Poisson distribution well. Furthermore, to examine the validity of the Poisson parameters obtained from the experimental bacterial cell counts, we compared them with parameters estimated using random number generation via computer simulation. The experimentally obtained Poisson distribution parameters were within the range of the parameters estimated using the computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies involving a few bacterial cells. In particular, the procedure presented in this study enables the development of an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
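
    A likelihood-ratio (G) goodness-of-fit test of the kind used here can be sketched directly: simulate counts for, say, 500 dilution wells with a mean of one cell, pool the upper tail into one bin, and compare observed with Poisson-expected frequencies at the ML mean. All numbers are illustrative.

```python
import math, random

def poisson_draw(mu, rng):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(9)
counts = [poisson_draw(1.0, rng) for _ in range(500)]   # 500 wells, mean 1 cell

m = sum(counts) / len(counts)                # ML estimate of the Poisson mean
obs = [counts.count(k) for k in range(4)] + [sum(1 for c in counts if c >= 4)]
pk = [math.exp(-m) * m ** k / math.factorial(k) for k in range(4)]
exp_ = [p * len(counts) for p in pk] + [(1.0 - sum(pk)) * len(counts)]
# G statistic; under H0 roughly chi-square with 5 - 1 - 1 = 3 df
g = 2.0 * sum(o * math.log(o / e) for o, e in zip(obs, exp_) if o > 0)
```

    A small G (below the chi-square critical value for 3 df, 7.815 at the 5% level) means the dilution counts are consistent with a Poisson distribution, which is the conclusion the study reaches for its serotypes.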

  9. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.

  10. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

    ERIC Educational Resources Information Center

    Baschera, Gian-Marco; Gross, Markus

    2010-01-01

    We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

  11. The distribution of catchment coverage by stationary rainstorms

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.

    1984-01-01

    The occurrence of wetted rainstorm area within a catchment is modeled as a Poisson arrival process in which each storm is composed of stationary, nonoverlapping, independent random cell clusters whose centers are Poisson-distributed in space and whose areas are fractals. The two Poisson parameters and hence the first two moments of the wetted fraction are derived in terms of catchment average characteristics of the (observable) station precipitation. The model is used to estimate spatial properties of tropical air mass thunderstorms on six tropical catchments in the Sudan.

  12. Complex analysis of neuronal spike trains of deep brain nuclei in patients with Parkinson's disease.

    PubMed

    Chan, Hsiao-Lung; Lin, Ming-An; Lee, Shih-Tseng; Tsai, Yu-Tai; Chao, Pei-Kuang; Wu, Tony

    2010-04-05

Deep brain stimulation (DBS) of the subthalamic nucleus (STN) has been used to alleviate symptoms of Parkinson's disease. During image-guided stereotactic surgery, signals from microelectrode recordings are used to distinguish the STN from adjacent areas, particularly from the substantia nigra pars reticulata (SNr). Neuronal firing patterns based on interspike intervals (ISI) are commonly used. In the present study, arrival time-based measures, including Lempel-Ziv complexity and deviation-from-Poisson index were employed. Our results revealed significant differences in the arrival time-based measures among non-motor STN, motor STN and SNr and better discrimination than the ISI-based measures. The larger deviations from the Poisson process in the SNr implied less complex dynamics of neuronal discharges. If spike classification was not used, the arrival time-based measures still produced statistical differences among STN subdivisions and SNr, but the ISI-based measures only showed significant differences between motor and non-motor STN. Arrival time-based measures are less affected by spike misclassifications, and may be used as an adjunct for the identification of the STN during microelectrode targeting.

  13. Generalized Poisson-Kac Processes: Basic Properties and Implications in Extended Thermodynamics and Transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2016-04-01

We introduce a new class of stochastic processes in R^n, referred to as generalized Poisson-Kac (GPK) processes, that generalize the Poisson-Kac telegrapher's random motion to higher dimensions. These stochastic processes possess finite propagation velocity, almost everywhere smooth trajectories, and converge in the Kac limit to Brownian motion. GPK processes are defined by coupling the selection of a bounded velocity vector from a family of N distinct ones with a Markovian dynamics controlling this selection probabilistically. This model can be used as a probabilistic tool for a stochastically consistent formulation of extended thermodynamic theories far from equilibrium.
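
The one-dimensional process that GPK generalizes, the Kac telegraph motion, is easy to simulate: a particle moves at constant speed and reverses direction at the events of a Poisson process. A minimal sketch (parameters hypothetical) showing the finite propagation velocity the abstract mentions:

```python
import random

def telegraph(v, a, t_end, rng):
    """1D Poisson-Kac telegraph motion: constant speed v, velocity sign flips
    at the events of a Poisson process of rate a."""
    t, x, s = 0.0, 0.0, rng.choice([-1, 1])
    while True:
        tau = rng.expovariate(a)      # waiting time to the next reversal
        if t + tau >= t_end:
            return x + s * v * (t_end - t)
        x += s * v * tau
        t += tau
        s = -s

rng = random.Random(0)
xs = [telegraph(v=1.0, a=5.0, t_end=10.0, rng=rng) for _ in range(2000)]
print(max(abs(x) for x in xs))  # finite propagation speed: never exceeds v * t_end
```

Unlike Brownian motion, no trajectory can travel farther than v·t, and in the Kac limit (large a, large v with v²/a fixed) the statistics approach Brownian motion.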

  14. Applying the Anderson-Darling test to suicide clusters: evidence of contagion at U. S. universities?

    PubMed

    MacKenzie, Donald W

    2013-01-01

Suicide clusters at Cornell University and the Massachusetts Institute of Technology (MIT) prompted popular and expert speculation of suicide contagion. However, some clustering is to be expected in any random process. This work tested whether suicide clusters at these two universities differed significantly from those expected under a homogeneous Poisson process, in which suicides occur randomly and independently of one another. Suicide dates were collected for MIT and Cornell for 1990-2012. The Anderson-Darling statistic was used to test the goodness-of-fit of the intervals between suicides to the distribution expected under the Poisson process. Suicides at MIT were consistent with the homogeneous Poisson process, while those at Cornell showed clustering inconsistent with such a process (p = .05). The Anderson-Darling test provides a statistically powerful means to identify suicide clustering in small samples. Practitioners can use this method to test for clustering in relevant communities. The difference in clustering behavior between the two institutions suggests that more institutions should be studied to determine the prevalence of suicide clustering in universities and its causes.
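
Under a homogeneous Poisson process the inter-event intervals are exponential, so the test reduces to an Anderson-Darling test of exponentiality. A minimal sketch of the A² statistic (with the rate estimated from the sample mean, as is common; the data below are synthetic, not the study's):

```python
import math, random

def ad_exponential(intervals):
    """Anderson-Darling A^2 statistic for exponentiality of inter-event times,
    with the rate estimated from the sample mean (so critical values would
    need the usual estimated-parameter correction)."""
    n = len(intervals)
    mean = sum(intervals) / n
    z = sorted(1.0 - math.exp(-x / mean) for x in intervals)
    s = sum((2 * i + 1) * (math.log(z[i]) + math.log(1.0 - z[n - 1 - i]))
            for i in range(n))
    return -n - s / n

rng = random.Random(7)
a_expon = ad_exponential([rng.expovariate(1.0) for _ in range(50)])  # Poisson-like gaps
a_regular = ad_exponential([1.0] * 50)  # perfectly regular gaps: clearly non-Poisson
print(a_expon, a_regular)
```

A sample whose intervals deviate from the exponential shape (here, perfectly regular gaps) produces a much larger A² than a genuinely Poissonian sample; clustered intervals would inflate A² in the same way.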

  15. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    PubMed Central

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age-related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072
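
The zero-inflated Poisson model the abstract compares against is a two-component mixture: a point mass of structural zeros plus an ordinary Poisson count. A minimal sketch of its pmf (λ and π values hypothetical), showing the excess-zero behavior that motivated the comparison:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: a point mass pi of structural zeros mixed with
    a Poisson(lam) count component."""
    base = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1.0 - pi) * base

lam, pi = 2.0, 0.3
probs = [zip_pmf(k, lam, pi) for k in range(50)]
print(sum(probs))                            # ~1.0: a proper pmf
print(zip_pmf(0, lam, pi), math.exp(-lam))   # excess zeros vs a plain Poisson
```

For the same λ, the ZIP puts strictly more mass at zero than the plain Poisson, which is exactly the feature needed when field data show an excess of zeros.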

  16. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
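
For readers unfamiliar with the primitive being parallelized, here is a minimal sequential sketch of planar Poisson disk sampling by naive dart throwing (not the paper's surface-intrinsic, priority-based parallel algorithm; parameters hypothetical):

```python
import math, random

def dart_throw_poisson_disk(r, n_trials, rng):
    """Naive dart throwing in the unit square: accept a candidate point only
    if it keeps distance >= r from every accepted sample (O(n^2) toy version)."""
    pts = []
    for _ in range(n_trials):
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
    return pts

pts = dart_throw_poisson_disk(r=0.1, n_trials=2000, rng=random.Random(3))
d_min = min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
print(len(pts), d_min)  # every pair of accepted samples is at least r apart
```

The defining invariant, a guaranteed minimum spacing r between any two samples, is what the paper's random unique priorities preserve while letting many threads accept candidates concurrently.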

17. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of Poisson statistics.

    PubMed

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of confidence interval (CI) estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates.
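
The core Poisson idea behind such replicate-based quantification is the classic single-hit model: if templates distribute randomly across reactions, the fraction of negative reactions estimates exp(-λ). A minimal sketch (the 21/42 split below is a hypothetical run, not the paper's data):

```python
import math

def copies_per_reaction(n_negative, n_total):
    """Single-hit Poisson model behind limiting-dilution assays:
    P(reaction is negative) = exp(-lambda), so lambda = -ln(negative fraction)."""
    return -math.log(n_negative / n_total)

# hypothetical run: 21 of 42 replicate reactions are negative -> lambda = ln 2
lam_est = copies_per_reaction(21, 42)
print(lam_est)  # ~0.693 integration events per reaction
```

This is why no standard dilution curve is needed for absolute quantification: the negative fraction alone pins down the mean template count per reaction.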

  18. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    PubMed

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides.
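
The basic live-timed counting idea that the paper builds on can be sketched in a few lines: reconstruct the accumulated live time from stored (start, stop) stamps and divide the counts by it (the numbers below are hypothetical, and this omits the paper's treatment of non-homogeneous rates and variances):

```python
def live_time_rate(event_count, live_intervals):
    """Dead-time-corrected counting rate: divide the number of recorded events
    by the accumulated live time reconstructed from stored (start, stop)
    live-time stamps, as in conventional live-timed counting."""
    live = sum(stop - start for start, stop in live_intervals)
    return event_count / live

# hypothetical stamps: 100 s of real time, each second only 0.9 s live
intervals = [(i * 1.0, i * 1.0 + 0.9) for i in range(100)]
r = live_time_rate(900, intervals)
print(r)  # ~10 counts per second of live time
```

Storing the stamps themselves (rather than only the summed live time) is what lets the paper handle rates that vary during the measurement, as for short-lived radionuclides.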

  19. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
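
Why a Poisson mixture has heavier tails than a single Poisson is easy to see numerically: mixing components with different means makes the variance exceed the mean. A minimal sketch with a hypothetical two-component mixture (not the paper's fitted model):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def mixture_pmf(k, weights, lams):
    """Finite Poisson mixture: a convex combination of Poisson components,
    giving heavier tails than any single Poisson can."""
    return sum(w * poisson_pmf(k, l) for w, l in zip(weights, lams))

weights, lams = [0.7, 0.3], [3.0, 12.0]   # hypothetical two-component sensor model
mean = sum(w * l for w, l in zip(weights, lams))
second = sum(w * (l + l * l) for w, l in zip(weights, lams))  # E[K^2] of the mixture
print(mean, second - mean ** 2)  # variance > mean: overdispersion a single Poisson lacks
```

A single Poisson is constrained to variance = mean, so it cannot reproduce the long-tailed, overdispersed behavior observed in real sensor data; the mixture can.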

  20. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution.

    PubMed

    Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton

    2018-03-13

The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ∼10^9 unknowns and 100s of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.

  1. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.

  2. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid for simulations of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
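
The qualitative content of such error expressions is that a rate estimated from N Poisson events has relative error of order 1/√N. A minimal stdlib-Python sketch (a generic Poisson process, not the paper's kinetic Monte Carlo setup):

```python
import math, random

def rate_estimates(lam, t_obs, n_runs, rng):
    """Repeatedly simulate a Poisson process of rate lam and estimate the rate
    in each run as (number of events) / (observation time)."""
    ests = []
    for _ in range(n_runs):
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(lam)
            if t > t_obs:
                break
            n += 1
        ests.append(n / t_obs)
    return ests

ests = rate_estimates(lam=1.0, t_obs=100.0, n_runs=2000, rng=random.Random(11))
m = sum(ests) / len(ests)
sd = (sum((e - m) ** 2 for e in ests) / (len(ests) - 1)) ** 0.5
print(sd / m, 1 / math.sqrt(100.0))  # relative error ~ 1/sqrt(expected event count)
```

With ~100 expected events per run, the run-to-run relative scatter of the estimated rate is close to 10%, which is why simulations with few diffusion events yield poorly determined conductivities.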

  3. A numerical investigation into the ability of the Poisson PDE to extract the mass-density from land-based gravity data: A case study of salt diapirs in the north coast of the Persian Gulf

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Yahya; Safari, Abdolreza

    2017-08-01

This paper presents a numerical investigation into the capability of Poisson's Partial Differential Equation (PDE) at Earth's surface to extract the near-surface mass-density from land-based gravity data. For this purpose, it first focuses on approximating the gradient tensor of Earth's gravitational potential by means of land-based gravity data. Then, based on the concepts of both the gradient tensor and Poisson's PDE at the Earth's surface, certain formulae are proposed for the mass-density determination. Furthermore, this paper shows how the generalized Tikhonov regularization strategy can be used to enhance the efficiency of the proposed approach. Finally, in a real case study, the formulae are applied to 6350 gravity stations located within a part of the north coast of the Persian Gulf. The case study numerically indicates that the proposed formulae, provided by Poisson's PDE, have the ability to convert land-based gravity data into the terrain mass-density, which was used to depict areas of salt diapirs in the region of the case study.
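
The underlying physical relation is the standard Poisson equation for Newtonian gravity, div g = -4πGρ, which lets the trace of an observed gravity-gradient tensor be converted to a local mass density. A minimal sketch (the gradient values below are hypothetical illustrations, not the paper's survey data or its exact formulae):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def density_from_gradient_trace(gxx, gyy, gzz):
    """Poisson's equation for Newtonian gravity, div g = -4*pi*G*rho, read
    backwards: the trace of the gravity-gradient tensor gives the local
    mass density."""
    return -(gxx + gyy + gzz) / (4.0 * math.pi * G)

rho = density_from_gradient_trace(-0.9e-6, -0.6e-6, -0.7e-6)
print(rho)  # on the order of typical crustal rock densities (~2600 kg/m^3)
```

Low-density bodies such as salt diapirs then show up as local depressions in the recovered density field, which is the basis of the case study.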

  4. The exponential-Poisson model for recurrent event data: an application to a set of data on malaria in Brazil.

    PubMed

    Macera, Márcia A C; Louzada, Francisco; Cancho, Vicente G; Fontes, Cor J F

    2015-03-01

In this paper, we introduce a new model for recurrent event data characterized by a fully parametric baseline rate function based on the exponential-Poisson distribution. The model arises from a latent competing risk scenario, in the sense that there is no information about which cause was responsible for the event occurrence. The time of each recurrence is then given by the minimum lifetime value among all latent causes. The new model has the classical homogeneous Poisson process as a particular case. The properties of the proposed model are discussed, including its hazard rate function, survival function, and ordinary moments. The inferential procedure is based on the maximum likelihood approach. We consider the important issue of model selection between the proposed model and its particular case via the likelihood ratio test and score test. Goodness of fit of the recurrent event models is assessed using Cox-Snell residuals. A simulation study evaluates the performance of the estimation procedure for small and moderate sample sizes. Applications to two real data sets are provided to illustrate the proposed methodology. One of them, first analyzed by our team of researchers, concerns the recurrence of malaria, an infectious disease caused by a protozoan parasite that infects red blood cells.
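
The latent competing-risk construction described in the abstract is easy to simulate directly: draw a zero-truncated Poisson number of latent causes, give each an exponential lifetime, and observe the minimum. A minimal sketch (parameter values hypothetical):

```python
import math, random

def poisson_dev(lam, rng):
    # Knuth's method for a Poisson random deviate
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def exp_poisson_time(lam, beta, rng):
    """One recurrence time from the latent competing-risk construction:
    N ~ zero-truncated Poisson(lam) latent causes, each with an Exp(beta)
    lifetime; the observed time is the minimum of the N lifetimes."""
    n = 0
    while n == 0:                 # rejection step gives the zero-truncated Poisson
        n = poisson_dev(lam, rng)
    return min(rng.expovariate(beta) for _ in range(n))

rng = random.Random(5)
mean_few = sum(exp_poisson_time(0.5, 1.0, rng) for _ in range(5000)) / 5000
mean_many = sum(exp_poisson_time(5.0, 1.0, rng) for _ in range(5000)) / 5000
print(mean_few, mean_many)  # more latent causes push the first event earlier
```

As λ grows the expected number of competing latent causes grows and events occur sooner, which is the mechanism behind the model's extra flexibility over the homogeneous Poisson special case.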

  5. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    PubMed

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of the bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but can also handle under- or over-dispersed data sets. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ1, λ2, and λ3). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs.
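
The bivariate Poisson distribution underlying this model is usually built by trivariate reduction: two correlated counts share a common Poisson component, whose mean is the third parameter. A minimal simulation sketch (parameter values hypothetical, and without the diagonal inflation or regression structure):

```python
import math, random

def poisson_dev(lam, rng):
    # Knuth's method for a Poisson random deviate
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def bivariate_poisson(l1, l2, l3, n, rng):
    """Trivariate reduction: X1 = Y1 + Y3, X2 = Y2 + Y3 with independent
    Poisson Y's, so the shared Y3 makes Cov(X1, X2) = l3."""
    pairs = []
    for _ in range(n):
        y3 = poisson_dev(l3, rng)
        pairs.append((poisson_dev(l1, rng) + y3, poisson_dev(l2, rng) + y3))
    return pairs

pairs = bivariate_poisson(2.0, 3.0, 1.5, 20000, random.Random(9))
m1 = sum(a for a, _ in pairs) / len(pairs)
m2 = sum(b for _, b in pairs) / len(pairs)
cov = sum((a - m1) * (b - m2) for a, b in pairs) / (len(pairs) - 1)
print(m1, m2, cov)  # means near l1+l3 = 3.5 and l2+l3 = 4.5, covariance near l3 = 1.5
```

The shared component is what lets a single model capture the correlation between reported AVC counts and carcass removal counts generated by the same underlying collisions.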

  6. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
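
A minimal sketch of the random Poisson-gap idea the paper averages over: gaps between sampled grid points are drawn from a sinusoidally weighted Poisson distribution so that gaps grow toward late evolution times (parameters hypothetical; production implementations additionally iterate the weight until an exact target point count is hit):

```python
import math, random

def poisson_dev(lam, rng):
    # Knuth's method for a Poisson random deviate (handles lam = 0)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_gap_schedule(grid_size, avg_gap, rng):
    """Successive gaps drawn from a sinusoidally weighted Poisson distribution,
    in the spirit of Poisson-gap nonuniform sampling: small gaps early,
    larger gaps at late evolution times."""
    points, i = [], 0
    while i < grid_size:
        points.append(i)
        frac = (i + 0.5) / grid_size
        i += 1 + poisson_dev(avg_gap * math.sin(frac * math.pi / 2.0), rng)
    return points

sched = poisson_gap_schedule(128, 6.0, random.Random(2))
print(len(sched), sched[:8])  # a sparse, gap-constrained schedule on a 128-point grid
```

The paper's deterministic variant replaces these random deviates with the expected gap sizes, keeping the same average gap profile while making the schedule fully reproducible.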

7. An empirical Bayesian and Buhlmann approach with non-homogeneous Poisson process

    NASA Astrophysics Data System (ADS)

    Noviyanti, Lienda

    2015-12-01

All general insurance companies in Indonesia have to adjust their current premium rates according to the maximum and minimum limit rates in the new regulation established by the Financial Services Authority (Otoritas Jasa Keuangan / OJK). In this research, we estimated premium rates by means of the Bayesian and the Buhlmann approach using historical claim frequency and claim severity across five risk groups. We assumed a Poisson-distributed claim frequency and a normally distributed claim severity. In particular, we used a non-homogeneous Poisson process for estimating the parameters of claim frequency. We found that the estimated premium rates are higher than the actual current rate. With regard to the OJK upper and lower limit rates, the estimates vary among the five risk groups; some fall inside the interval and some fall outside it.

  8. Generalized master equations for non-Poisson dynamics on networks.

    PubMed

    Hoffmann, Till; Porter, Mason A; Lambiotte, Renaud

    2012-10-01

    The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature.

  10. Weber's law implies neural discharge more regular than a Poisson process.

    PubMed

    Kang, Jing; Wu, Jianhua; Smerieri, Anteo; Feng, Jianfeng

    2010-03-01

Weber's law is one of the basic laws in psychophysics, but the link between this psychophysical behavior and the neuronal response has not yet been established. In this paper, we carried out an analysis on the spike train statistics when Weber's law holds, and found that the efferent spike train of a single neuron is less variable than a Poisson process. For a population of neurons, Weber's law is satisfied only when the population size is small (< 10 neurons). However, if the neurons in the population share a weak correlation in their discharges and the individual neuronal spike trains are more regular than a Poisson process, Weber's law holds without any restriction on the population size. A biased-competition attractor network also demonstrates that the coefficient of variation of the interspike interval in the winning pool should be less than one for the validity of Weber's law. Our work links Weber's law with neural firing properties quantitatively, shedding light on the relation between psychophysical behavior and neuronal responses.
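
The key quantity in the abstract, the coefficient of variation (CV) of interspike intervals, is 1 for a Poisson process and below 1 for more regular firing. A minimal sketch using gamma-distributed ISIs as a stand-in for sub-Poisson regularity (parameters hypothetical):

```python
import random

def cv(xs):
    """Coefficient of variation: sample standard deviation / sample mean."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5 / m

rng = random.Random(1)
isi_poisson = [rng.expovariate(1.0) for _ in range(20000)]         # CV -> 1
isi_regular = [rng.gammavariate(4.0, 0.25) for _ in range(20000)]  # CV -> 0.5
cv_p, cv_r = cv(isi_poisson), cv(isi_regular)
print(cv_p, cv_r)  # regular (sub-Poisson) firing shows CV well below 1
```

The paper's condition for Weber's law, CV of the winning pool's ISI below one, corresponds to the second regime here: spike trains more regular than Poisson.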

  11. Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.

    PubMed

    Gomez, Christophe; Hartung, Niklas

    2018-01-01

    Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.

  12. Clustered mixed nonhomogeneous Poisson process spline models for the analysis of recurrent event panel data.

    PubMed

    Nielsen, J D; Dean, C B

    2008-09-01

    A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
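
For readers unfamiliar with the nonhomogeneous Poisson processes at the core of this model, here is a minimal stdlib-Python sketch of the standard Lewis-Shedler thinning simulator for an NHPP with a smooth intensity (intensity function and parameters hypothetical, unrelated to the moth data):

```python
import math, random

def nhpp_thinning(intensity, lam_max, t_end, rng):
    """Lewis-Shedler thinning: draw candidate times from a homogeneous Poisson
    process at rate lam_max, keep each time t with prob intensity(t)/lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > t_end:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

# hypothetical smooth intensity peaking mid-interval, bounded above by lam_max = 5
rate = lambda t: 5.0 * math.sin(math.pi * t / 10.0)
events = nhpp_thinning(rate, 5.0, 10.0, random.Random(4))
print(len(events))
```

The paper's smooth penalized-spline intensities play the role of `intensity` here; mixtures of such processes then generate the clustered panel counts described in the abstract.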

  13. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
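
As background for NHPP-based SRMs, a minimal sketch of the textbook Goel-Okumoto mean value function (a canonical member of this model class, not the equilibrium-distribution variant the paper proposes; parameter values hypothetical):

```python
import math

def goel_okumoto_mean(t, a, b):
    """Mean value function m(t) = a * (1 - exp(-b t)) of the Goel-Okumoto NHPP
    software reliability model: a = expected total number of faults,
    b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

a, b = 120.0, 0.05            # hypothetical fault count and detection rate
m40 = goel_okumoto_mean(40.0, a, b)
print(m40, a - m40)           # faults found by t = 40, faults expected to remain
```

Estimating a and b from fault-detection data gives both the expected number of remaining faults (a - m(t)) and a basis for release-timing decisions, which is the use case the abstract describes.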

  14. Symplectic discretization for spectral element solution of Maxwell's equations

    NASA Astrophysics Data System (ADS)

    Zhao, Yanmin; Dai, Guidong; Tang, Yifa; Liu, Qinghuo

    2009-08-01

    Applying the spectral element method (SEM) based on the Gauss-Lobatto-Legendre (GLL) polynomial to discretize Maxwell's equations, we obtain a Poisson system or a Poisson system with at most a perturbation. For the system, we prove that any symplectic partitioned Runge-Kutta (PRK) method preserves the Poisson structure and its implied symplectic structure. Numerical examples show the high accuracy of SEM and the benefit of conserving energy due to the use of symplectic methods.

  15. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
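
The COM-Poisson pmf behind this GLM is P(k) ∝ λ^k / (k!)^ν, where ν tunes the dispersion. A minimal sketch with a truncated normalizing sum (parameter values hypothetical):

```python
import math

def com_poisson_pmf(k, lam, nu, k_max=60):
    """COM-Poisson pmf P(k) proportional to lam^k / (k!)^nu, with a truncated
    normalizer; nu < 1 gives overdispersion, nu > 1 underdispersion, and
    nu = 1 recovers the ordinary Poisson."""
    z = sum(lam ** j / math.factorial(j) ** nu for j in range(k_max))
    return lam ** k / math.factorial(k) ** nu / z

lam = 3.0
p_com = com_poisson_pmf(2, lam, 1.0)
p_poisson = math.exp(-lam) * lam ** 2 / 2.0
print(p_com, p_poisson)  # nu = 1 recovers the ordinary Poisson pmf
```

A single extra parameter thus covers under-, equi-, and overdispersed counts, which is exactly the flexibility the article evaluates.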

  16. Comparison of robustness to outliers between robust Poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.

  17. Brain, music, and non-Poisson renewal processes

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo

    2007-06-01

    In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(-(γt)^α), with 0.5 < α < 1]. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse power law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μS < 2 responds only to a complex driving signal P with μP ⩽ μS, we conclude that the results of our analysis may explain the influence of music on the human brain.
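    The stretched-exponential survival probability Ψ(t) = exp[-(γt)^α] can be inverted directly to draw renewal waiting times, which makes the nonexponential histograms described above easy to reproduce numerically. A sketch under assumed parameter values (γ = 1, α = 0.7; not taken from the paper):

```python
import math
import random

def stretched_exp_wait(gamma, alpha, rng):
    # invert the survival function Psi(t) = exp(-(gamma * t)**alpha)
    u = 1.0 - rng.random()            # uniform on (0, 1]
    return (-math.log(u)) ** (1.0 / alpha) / gamma

rng = random.Random(1)
waits = [stretched_exp_wait(1.0, 0.7, rng) for _ in range(200000)]
# empirical survival at t = 1 should match Psi(1) = exp(-1)
empirical_survival_1 = sum(1 for w in waits if w > 1.0) / len(waits)
```

Because the draws are independent and identically distributed, the resulting event sequence is a renewal process by construction; the paper's contribution is showing that EEG and music data pass that renewal test.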

  18. Analysis of single-molecule fluorescence spectroscopic data with a Markov-modulated Poisson process.

    PubMed

    Jäger, Mark; Kiel, Alexander; Herten, Dirk-Peter; Hamprecht, Fred A

    2009-10-05

    We present a photon-by-photon analysis framework for the evaluation of data from single-molecule fluorescence spectroscopy (SMFS) experiments using a Markov-modulated Poisson process (MMPP). A MMPP combines a discrete (and hidden) Markov process with an additional Poisson process reflecting the observation of individual photons. The algorithmic framework is used to automatically analyze the dynamics of the complex formation and dissociation of Cu2+ ions with the bidentate ligand 2,2'-bipyridine-4,4'dicarboxylic acid in aqueous media. The process of association and dissociation of Cu2+ ions is monitored with SMFS. The dcbpy-DNA conjugate can exist in two or more distinct states which influence the photon emission rates. The advantage of a photon-by-photon analysis is that no information is lost in preprocessing steps. Different model complexities are investigated in order to best describe the recorded data and to determine transition rates on a photon-by-photon basis. The main strength of the method is that it allows to detect intermittent phenomena which are masked by binning and that are difficult to find using correlation techniques when they are short-lived.
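    The MMPP structure described in this entry, a hidden Markov chain switching the photon emission rate, can be sketched for two states as follows; the emission rates and switching constants are illustrative assumptions, not values from the experiment:

```python
import random

def simulate_mmpp(t_end, rates=(2.0, 20.0), switch=(0.5, 0.5), seed=0):
    """Photon arrival times from a 2-state Markov-modulated Poisson process.

    rates[i]  : photon emission rate while the hidden chain is in state i
    switch[i] : rate of leaving state i for the other state
    """
    rng = random.Random(seed)
    t, state, photons = 0.0, 0, []
    while True:
        t_photon = rng.expovariate(rates[state])   # next photon candidate
        t_switch = rng.expovariate(switch[state])  # next hidden-state change
        if t_photon < t_switch:
            t += t_photon
            if t > t_end:
                break
            photons.append(t)
        else:
            t += t_switch
            if t > t_end:
                break
            state = 1 - state
    return photons

photons = simulate_mmpp(1000.0)
```

With symmetric switching the chain spends half its time in each state, so the long-run photon rate is the average of the two emission rates; the photon-by-photon analysis in the paper infers the hidden state sequence from exactly this kind of stream.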

  19. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances: at sufficiently small probability levels there are arbitrarily large deviations from the model of the normal process. Therefore, we attempt to describe the statistical samples {δfoF2} based on the Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability-density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. This analysis supports the applicability of a model based on the Poisson random process for the statistical description of the variations {δfoF2} and for probabilistic estimates of their range during heliogeophysical disturbances.

  20. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. 
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness and kurtosis both tend to zero for large earthquake rates: for the Gaussian law, these values are identically zero. A calculation of the NBD skewness and kurtosis levels based on the values of the first two statistical moments of the distribution, shows rapid increase of these upper moments levels. However, the observed catalogue values of skewness and kurtosis are rising even faster. This means that for small time intervals, the earthquake number distribution is even more heavy-tailed than the NBD predicts. Therefore for small time intervals, we propose using empirical number distributions appropriately smoothed for testing forecasted earthquake numbers.
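    The moment comparison described above can be reproduced from the closed-form skewness and excess kurtosis of the two distributions: for Poisson(λ) they are λ^(-1/2) and 1/λ, both vanishing as the rate grows (the Gaussian limit), while the NBD retains larger values at a matched mean. A sketch with illustrative parameter values:

```python
import math

def poisson_skew_kurt(lam):
    # skewness and excess kurtosis of Poisson(lam); both tend to 0 as lam grows
    return 1.0 / math.sqrt(lam), 1.0 / lam

def nbd_skew_kurt(r, p):
    # negative binomial (size r, success prob p): mean r(1-p)/p, variance mean/p
    skew = (2.0 - p) / math.sqrt(r * (1.0 - p))
    kurt = 6.0 / r + p * p / (r * (1.0 - p))
    return skew, kurt
```

For example, Poisson with rate 100 has skewness 0.1 and excess kurtosis 0.01, whereas an NBD with the same mean of 100 (r = 10, p = 1/11) keeps both moments far larger, mirroring the clustering the catalogues exhibit.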

  1. How does variance in fertility change over the demographic transition?

    PubMed Central

    Hruschka, Daniel J.; Burger, Oskar

    2016-01-01

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45–49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. PMID:27022082

  2. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the generalized linear model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model: a measure of predictive power defined by the relationship between the dependent variable (Y) and its expected value given the independent variables [E(Y|X)], where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional one in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient outperforms the traditional one in terms of bias and root mean square error (RMSE).

  3. On Subset Selection Procedures for Poisson Processes and Some Applications to the Binomial and Multinomial Problems

    DTIC Science & Technology

    1976-07-01

    PURDUE UNIVERSITY DEPARTMENT OF STATISTICS DIVISION OF MATHEMATICAL SCIENCES. On Subset Selection Procedures for Poisson Processes and Some Applications to the Binomial and Multinomial Problems. Department of Statistics, Division of Mathematical Sciences, Mimeograph Series #457, July 1976. This research was supported by the Office of Naval Research under Contract N00014-75-C-0455 at Purdue University.

  4. Crystalline nucleation in undercooled liquids: a Bayesian data-analysis approach for a nonhomogeneous Poisson process.

    PubMed

    Filipponi, A; Di Cicco, A; Principi, E

    2012-12-01

    A Bayesian data-analysis approach to data sets of maximum undercooling temperatures recorded in repeated melting-cooling cycles of high-purity samples is proposed. The crystallization phenomenon is described in terms of a nonhomogeneous Poisson process driven by a temperature-dependent sample nucleation rate J(T). The method was extensively tested by computer simulations and applied to real data for undercooled liquid Ge. It proved to be particularly useful in the case of scarce data sets where the usage of binned data would degrade the available experimental information.
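    A nonhomogeneous Poisson process with a bounded rate, such as the temperature-dependent nucleation rate J(T) above, can be simulated by Lewis-Shedler thinning: propose events at a constant majorizing rate and accept each with probability rate(t)/rate_max. A sketch with an assumed, illustrative rate function (not the paper's J(T)):

```python
import random

def sample_nhpp(rate, t_end, rate_max, seed=0):
    # Lewis-Shedler thinning: propose at rate_max, accept with prob rate(t)/rate_max
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > t_end:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)

# illustrative intensity rising linearly in time (expected count = 250 on [0, 10])
events = sample_nhpp(lambda t: 5.0 * t, 10.0, 50.0)
```

The thinning construction is also what makes the Bayesian analysis natural: each simulated cooling cycle yields one first-passage (maximum undercooling) sample from the same inhomogeneous process.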

  5. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Adiabatic elimination for systems with inertia driven by compound Poisson colored noise.

    PubMed

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2014-02-01

    We consider the dynamics of systems driven by compound Poisson colored noise in the presence of inertia. We study the limit when the frictional relaxation time and the noise autocorrelation time both tend to zero. We show that the Itô and Marcus stochastic calculuses naturally arise depending on these two time scales, and an extra intermediate type occurs when the two time scales are comparable. This leads to three different limiting regimes which are supported by numerical simulations. Furthermore, we establish that when the resulting compound Poisson process tends to the Wiener process in the frequent jump limit the Itô and Marcus calculuses, respectively, tend to the classical Itô and Stratonovich calculuses for Gaussian white noise, and the crossover type calculus tends to a crossover between the Itô and Stratonovich calculuses. Our results would be very helpful for understanding relevant experiments when jump type noise is involved.

  7. Measurement of Poisson's ratio of nonmetallic materials by laser holographic interferometry

    NASA Astrophysics Data System (ADS)

    Zhu, Jian T.

    1991-12-01

    By means of an off-axis collimated plane-wave coherent light arrangement and a pure-bending loading device, Poisson's ratio values of CFRP (carbon fiber-reinforced plastics plates, lay-up 0 degree(s), 90 degree(s)), GFRP (glass fiber-reinforced plastics plates, radial direction) and PMMA (polymethyl methacrylate, x, y direction) have been measured. By virtue of this study, the ministry standard for the Ministry of Aeronautical Industry (Testing method for the measurement of Poisson's ratio of non-metallic materials by laser holographic interferometry) has been published. The measurement process is fast and simple, and the results are reliable and accurate.

  8. Reduction of Poisson noise in measured time-resolved data for time-domain diffuse optical tomography.

    PubMed

    Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y

    2012-01-01

    A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise, which contaminates time-resolved photon counting data, is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing this probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction for time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image is smoothed by the noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening etc., is improved by the proposed noise reduction.

  9. Upgrading a high-throughput spectrometer for high-frequency (<400 kHz) measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishizawa, T., E-mail: nishizawa@wisc.edu; Nornberg, M. D.; Den Hartog, D. J.

    2016-11-15

    The upgraded spectrometer used for charge exchange recombination spectroscopy on the Madison Symmetric Torus resolves emission fluctuations up to 400 kHz. The transimpedance amplifier’s cutoff frequency was increased based upon simulations comparing the change in the measured photon counts for time-dynamic signals. We modeled each signal-processing stage of the diagnostic and scanned the filtering frequency to quantify the uncertainty in the photon counting rate. This modeling showed that uncertainties can be calculated based on assuming each amplification stage is a Poisson process and by calibrating the photon counting rate with a DC light source to address additional variation.

  10. Upgrading a high-throughput spectrometer for high-frequency (<400 kHz) measurements

    NASA Astrophysics Data System (ADS)

    Nishizawa, T.; Nornberg, M. D.; Den Hartog, D. J.; Craig, D.

    2016-11-01

    The upgraded spectrometer used for charge exchange recombination spectroscopy on the Madison Symmetric Torus resolves emission fluctuations up to 400 kHz. The transimpedance amplifier's cutoff frequency was increased based upon simulations comparing the change in the measured photon counts for time-dynamic signals. We modeled each signal-processing stage of the diagnostic and scanned the filtering frequency to quantify the uncertainty in the photon counting rate. This modeling showed that uncertainties can be calculated based on assuming each amplification stage is a Poisson process and by calibrating the photon counting rate with a DC light source to address additional variation.

  11. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    PubMed

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used in modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that the single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
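    A Jacobi-preconditioned CG iteration of the kind benchmarked above can be sketched in a few lines; this plain-Python, dense-matrix version is only illustrative of the algorithm (the study itself runs sparse cuSPARSE/cuBLAS/CUSP kernels on GPU, and the test matrix here is a generic 1D finite-difference Poisson system, not a biomolecular one):

```python
def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    # conjugate gradient with Jacobi (diagonal) preconditioning, dense SPD A
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A@x for x = 0
    minv = [1.0 / A[i][i] for i in range(n)]   # the Jacobi preconditioner
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# 1D finite-difference Poisson system: tridiag(-1, 2, -1), all-ones source
n = 50
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = jacobi_pcg(A, b)
```

For banded finite-difference systems like this, the diagonal (DIA) storage format mentioned in the abstract stores exactly the few nonzero diagonals that the matrix-vector product above touches.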

  12. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  13. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes.

    PubMed

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
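    The partitioning property invoked above says that independently marking each event of a Poisson process with probability p splits it into independent Poisson processes with rates pλ and (1-p)λ. A quick numerical check (the rate, marking probability, and trial count are arbitrary choices):

```python
import random
import statistics

def split_poisson_counts(lam, p, n_trials, seed=0):
    # draw Poisson(lam) counts and mark each event independently with prob p
    rng = random.Random(seed)
    kept, dropped = [], []
    for _ in range(n_trials):
        n, t = 0, rng.expovariate(lam)
        while t < 1.0:               # count exponential arrivals in [0, 1)
            n += 1
            t += rng.expovariate(lam)
        k = sum(1 for _ in range(n) if rng.random() < p)
        kept.append(k)
        dropped.append(n - k)
    return kept, dropped

kept, dropped = split_poisson_counts(10.0, 0.3, 20000)
```

Both partitioned streams are again Poisson (variance equal to mean), which is the closure property that lets the authors map protein production onto known mRNA distribution results.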

  14. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes

    NASA Astrophysics Data System (ADS)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.

  15. Poisson denoising on the sphere

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.

    2009-08-01

    In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS thus consists in decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets...), and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
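    The classical one-dimensional example of a VST is the Anscombe transform, 2√(x + 3/8), which maps Poisson counts to values with approximately unit variance regardless of the intensity; MS-VSTS applies the same idea to multi-scale coefficients. A quick check of the stabilization (the sampler and intensity are illustrative, not from the paper):

```python
import math
import random
import statistics

def anscombe(x):
    # variance-stabilizing transform for Poisson counts
    return 2.0 * math.sqrt(x + 0.375)

def sample_poisson(lam, rng):
    # Knuth-style Poisson sampler (adequate for moderate lam)
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(0)
counts = [sample_poisson(20.0, rng) for _ in range(20000)]
stabilized = [anscombe(c) for c in counts]
```

The raw counts have variance tracking the mean (~20), while the transformed values have variance close to 1 and are approximately Gaussian, which is what makes simple Gaussian hypothesis tests on coefficients valid.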

  16. A generalized Poisson solver for first-principles device simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.

  17. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

    The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
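    The two functional-response forms referred to above are the Holling Type 2 curve, f(N) = aN/(1 + ahN), and the sigmoidal Type 3 curve, f(N) = aN²/(1 + ahN²), both saturating at 1/h. A sketch with arbitrary attack rate a and handling time h (not fitted values from the study):

```python
def holling_type2(n, a, h):
    # captures per predator per unit time at prey density n; saturates at 1/h
    return a * n / (1.0 + a * h * n)

def holling_type3(n, a, h):
    # sigmoidal variant: capture rate accelerates at low prey density
    return a * n * n / (1.0 + a * h * n * n)
```

The shapes differ mainly at low prey density (Type 3 starts slower), which is one reason the two forms can fit the same field data equally well once predation rates are variable.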

  18. Complex wet-environments in electronic-structure calculations

    NASA Astrophysics Data System (ADS)

    Fisicaro, Giuseppe; Genovese, Luigi; Andreussi, Oliviero; Marzari, Nicola; Goedecker, Stefan

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, including the complex electrostatic screening coming from the solvent. In the present work we present a solver that handles both the generalized Poisson and the Poisson-Boltzmann equations. A preconditioned conjugate gradient (PCG) method has been implemented for the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively in some ten iterations. A self-consistent procedure enables us to solve the full Poisson-Boltzmann problem. The algorithms take advantage of a preconditioning procedure based on the BigDFT Poisson solver for the standard Poisson equation. They exhibit very high accuracy and parallel efficiency, and allow different boundary conditions, including surfaces. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. We present test calculations on large proteins to demonstrate efficiency and performance. This work was done within the PASC and NCCR MARVEL projects. Computer resources were provided by the Swiss National Supercomputing Centre (CSCS) under Project ID s499. LG acknowledges also support from the EXTMOS EU project.

  19. A study of trends and techniques for space base electronics

    NASA Technical Reports Server (NTRS)

    Trotter, J. D.; Wade, T. E.; Gassaway, J. D.

    1979-01-01

    The use of dry processing and alternate dielectrics for processing wafers is reported. A two-dimensional modeling program was written for the simulation of short-channel MOSFETs with nonuniform substrate doping. A key simplifying assumption is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. In solving the current continuity equation, the program does not converge; however, solving the two-dimensional Poisson equation for the potential distribution was achieved. The status of other 2D MOSFET simulation programs is summarized.

  20. Poisson's spot and Gouy phase

    NASA Astrophysics Data System (ADS)

    da Paz, I. G.; Soldati, Rodolfo; Cabral, L. A.; de Oliveira, J. G. G.; Sampaio, Marcos

    2016-12-01

    Recently there have been experimental results on Poisson spot matter-wave interferometry followed by theoretical models describing the relative importance of the wave and particle behaviors for the phenomenon. We propose an analytical theoretical model for Poisson's spot with matter waves based on the Babinet principle, in which we use the results for free propagation and single-slit diffraction. We take into account effects of loss of coherence and finite detection area using the propagator for a quantum particle interacting with an environment. We observe that the matter-wave Gouy phase plays a role in the existence of the central peak and thus corroborates the predominantly wavelike character of the Poisson's spot. Our model shows remarkable agreement with the experimental data for deuterium (D2) molecules.

  1. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events are statistically distributed with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
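The exponential wait-time property underlying this analysis is easy to reproduce in simulation; the rate below reuses the quoted Kp ≥ 6 mean wait of 16.55 days purely as an illustrative number.

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 1.0 / 16.55                    # events per day, illustrative storm rate
T = 2.0e6                             # observation window, days
n_events = rng.poisson(rate * T)
times = np.sort(rng.uniform(0.0, T, n_events))  # a homogeneous Poisson process
waits = np.diff(times)

# Wait times of a Poisson process are exponential: mean = std = 1/rate,
# and the fraction exceeding the mean is e^-1.
print(waits.mean(), waits.std(), np.mean(waits > 1.0 / rate))
```

A clumped (non-memoryless) process would fail this check, which is exactly the deviation the authors see before removing same-storm occurrences.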

  2. Demonstration of fundamental statistics by studying timing of electronics signals in a physics-based laboratory

    NASA Astrophysics Data System (ADS)

    Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.

    2017-07-01

    We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
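The overdispersion point can be illustrated with a gamma-mixed Poisson, the standard construction of the negative binomial; the parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 5.0, 2.0                      # gamma distribution of the Poisson rate
rates = rng.gamma(shape, scale, size=200000)
nb_counts = rng.poisson(rates)               # gamma-Poisson = negative binomial

fano_nb = nb_counts.var() / nb_counts.mean()
pure = rng.poisson(shape * scale, size=200000)
fano_poisson = pure.var() / pure.mean()
print(fano_nb, fano_poisson)                 # ~ 1 + scale = 3 versus ~ 1
```

Randomizing the rate (here by electronic noise, in the paper's interpretation) is what pushes the variance-to-mean ratio above the Poisson value of one.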

  3. Photon statistics in scintillation crystals

    NASA Astrophysics Data System (ADS)

    Bora, Vaibhav Joga Singh

    Scintillation-based gamma-ray detectors are widely used in medical imaging, high-energy physics, astronomy, and national security. Scintillation gamma-ray detectors are field-tested, relatively inexpensive, and have good detection efficiency. Semiconductor detectors are gaining popularity because of their superior capability to resolve gamma-ray energies. However, they are relatively hard to manufacture and therefore, at this time, are not available in as large formats and are much more expensive than scintillation gamma-ray detectors. Scintillation gamma-ray detectors consist of a scintillator, a material that emits optical (scintillation) photons when it interacts with ionizing radiation, and an optical detector that detects the emitted scintillation photons and converts them into an electrical signal. Compared to semiconductor gamma-ray detectors, scintillation gamma-ray detectors have relatively poor capability to resolve gamma-ray energies. This is in large part attributed to the "statistical limit" on the number of scintillation photons. The origin of this statistical limit is the assumption that scintillation photons are either Poisson distributed or super-Poisson distributed. This statistical limit is often defined by the Fano factor. The Fano factor of an integer-valued random process is defined as the ratio of its variance to its mean. Therefore, a Poisson process has a Fano factor of one. The classical theory of light limits the Fano factor of the number of photons to a value greater than or equal to one (Poisson case). However, the quantum theory of light allows for Fano factors less than one. We used two methods to look at the correlations between two detectors observing the same scintillation pulse to estimate the Fano factor of the scintillation photons. 
The relationship between the Fano factor and the correlation between the integrals of the two detected signals was analytically derived, and the Fano factor was estimated from measurements for SrI2:Eu, YAP:Ce and CsI:Na. We also found an empirical relationship between the Fano factor and the covariance as a function of time between two detectors observing the same scintillation pulse. This empirical model was used to estimate the Fano factor of LaBr3:Ce and YAP:Ce using the experimentally measured timing-covariance. The estimates of the Fano factor from the timing-covariance results were consistent with the estimates from the correlation between the integral signals. We found scintillation light from some scintillators to be sub-Poisson. For the same mean number of total scintillation photons, sub-Poisson light has lower noise. We then conducted a simulation study to investigate whether this low-noise sub-Poisson light can be used to improve spatial resolution. We calculated the Cramer-Rao bound for different detector geometries, positions of interaction, and Fano factors. The Cramer-Rao calculations were verified by generating simulated data and estimating the variance of the maximum likelihood estimator. We found that the Fano factor has no impact on the spatial resolution in gamma-ray imaging systems.
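The Fano factor defined above is simple to estimate from samples. A toy illustration (unrelated to the scintillation measurements) contrasts Poisson counts, with Fano factor near one, against binomial counts, which are sub-Poisson with Fano factor 1 − p:

```python
import numpy as np

def fano(counts):
    """Fano factor: variance-to-mean ratio of integer-valued counts."""
    return counts.var() / counts.mean()

rng = np.random.default_rng(2)
poisson_photons = rng.poisson(40.0, size=100000)
binomial_photons = rng.binomial(50, 0.8, size=100000)   # sub-Poisson source

print(fano(poisson_photons))    # ~ 1
print(fano(binomial_photons))   # ~ 1 - 0.8 = 0.2
```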

  4. Examining the spatially non-stationary associations between the second demographic transition and infant mortality: A Poisson GWR approach.

    PubMed

    Yang, Tse-Chuan; Shoff, Carla; Matthews, Stephen A

    2013-01-01

    Based on ecological studies, second demographic transition (SDT) theorists concluded that some areas in the US were in the vanguard of the SDT compared to others, implying that spatial nonstationarity may be inherent in the SDT process. Linking the SDT to the infant mortality literature, we set out to answer two related questions: Are the main components of the SDT, specifically marriage postponement, cohabitation, and divorce, associated with infant mortality? If yes, do these associations vary across the US? We applied global Poisson and geographically weighted Poisson regression (GWPR) models, a place-specific analytic approach, to county-level data in the contiguous US. After accounting for the racial/ethnic and socioeconomic compositions of counties and prenatal care utilization, we found (1) marriage postponement was negatively related to infant mortality in the southwestern states, but positively associated with infant mortality in parts of Indiana, Kentucky, and Tennessee, (2) cohabitation rates were positively related to infant mortality, and this relationship was stronger in California, coastal Virginia, and the Carolinas than other areas, and (3) a positive association between divorce rates and infant mortality in southwestern and northeastern areas of the US. These spatial patterns suggested that the associations between the SDT and infant mortality were stronger in the areas in the vanguard of the SDT than in others. The comparison between global Poisson and GWPR results indicated that a place-specific spatial analysis not only fit the data better, but also provided insights into understanding the non-stationarity of the associations between the SDT and infant mortality.

  5. Dual Roles for Spike Signaling in Cortical Neural Populations

    PubMed Central

    Ballard, Dana H.; Jehee, Janneke F. M.

    2011-01-01

    A prominent feature of signaling in cortical neurons is the randomness of the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate have repeatedly been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ frequency range multi-cell action potential correlations, together with spike timing dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798

  6. Investigating on the Differences between Triggered and Background Seismicity in Italy and Southern California.

    NASA Astrophysics Data System (ADS)

    Stallone, A.; Marzocchi, W.

    2017-12-01

    Earthquake occurrence may be approximated by a multidimensional Poisson clustering process, in which each point of the Poisson process is replaced by a cluster of points, the latter corresponding to the well-known aftershock sequence (triggered events). Earthquake clusters and their parents are assumed to occur according to a Poisson process at a constant temporal rate proportional to the tectonic strain rate, while events within a cluster are modeled as generations of dependent events reproduced by a branching process. Although the occurrence of such space-time clusters is a general feature across tectonic settings, seismic sequences show marked differences from region to region: one example, among many others, is that seismic sequences of moderate magnitude in the Italian Apennines seem to last longer than similar sequences in California. In this work we investigate possible differences in the earthquake clustering process between these two areas. First, we separate the triggered and background components of seismicity in the Italian and Southern California seismic catalogs. Then we study the space-time domain of the triggered earthquakes with the aim of identifying possible variations in the triggering properties across the two regions. In the second part of the work we focus on the characteristics of the background seismicity in both catalogs. The assumption of time stationarity of the background seismicity (which includes both cluster parents and isolated events) is still under debate. Some authors suggest that the independent component of seismicity could undergo transient perturbations at various time scales due to different physical mechanisms, such as viscoelastic relaxation, the presence of fluids, or non-stationary plate motion, whose impact may depend on the tectonic setting. 
Here we test whether the background seismicity in the two regions can be satisfactorily described by a time-homogeneous Poisson process and, if not, we quantitatively characterize the discrepancies from this reference process and the differences between the two regions.

  7. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448

  8. Structural interactions in ionic liquids linked to higher-order Poisson-Boltzmann equations

    NASA Astrophysics Data System (ADS)

    Blossey, R.; Maggs, A. C.; Podgornik, R.

    2017-06-01

    We present a derivation of generalized Poisson-Boltzmann equations starting from classical theories of binary fluid mixtures, employing an approach based on the Legendre transform as recently applied to the case of local descriptions of the fluid free energy. Under specific symmetry assumptions, and in the linearized regime, the Poisson-Boltzmann equation reduces to a phenomenological equation introduced by Bazant et al. [Phys. Rev. Lett. 106, 046102 (2011)], 10.1103/PhysRevLett.106.046102, whereby the structuring near the surface is determined by bulk coefficients.

  9. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to enable an empirical mean-squared-error comparison between the Bayes estimators and the existing minimum-variance unbiased and maximum-likelihood estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
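A minimal Monte Carlo sketch of such a comparison, assuming a gamma prior; the prior mean is deliberately placed at the true rate, which favors the Bayes estimator, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
true_lam, n, trials = 2.0, 5, 20000
a, b = 2.0, 1.0                    # gamma prior, shape a and rate b (prior mean 2)

mse_bayes = mse_mle = 0.0
for _ in range(trials):
    x = rng.poisson(true_lam, size=n)
    mle = x.mean()                           # also the MVU estimator here
    bayes = (a + x.sum()) / (b + n)          # posterior mean: Gamma(a + sum, b + n)
    mse_mle += (mle - true_lam) ** 2
    mse_bayes += (bayes - true_lam) ** 2

print(mse_bayes / trials, mse_mle / trials)  # the Bayes MSE is smaller here
```

The gamma prior is conjugate to the Poisson likelihood, so the posterior is again gamma and the Bayes estimator shrinks the sample mean toward the prior mean.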

  10. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface

    NASA Astrophysics Data System (ADS)

    Coons, Marc P.; Herbert, John M.

    2018-06-01

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ɛ. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ɛ(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.

  11. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface.

    PubMed

    Coons, Marc P; Herbert, John M

    2018-06-14

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ε. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ε(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.

  12. No control genes required: Bayesian analysis of qRT-PCR data.

    PubMed

    Matz, Mikhail V; Wright, Rachel M; Scott, James G

    2013-01-01

    Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundance targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R.
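The Poisson-GLM core of such a count model can be sketched with plain iteratively reweighted least squares; this deliberately omits the paper's lognormal random effects and the MCMC machinery of the MCMC.qpcr package, and the simulated design is invented for the demo.

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                # working weights for the log link
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                        # e.g. a treatment covariate
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ beta_true))        # molecule counts

beta_hat = poisson_irls(X, y)
print(beta_hat)                               # ~ [1.0, 0.5]
```

Because the model works on counts with a log link, zero counts enter the likelihood naturally instead of being discarded, which is the property the abstract highlights.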

  13. Waiting-time distributions of magnetic discontinuities: clustering or Poisson process?

    PubMed

    Greco, A; Matthaeus, W H; Servidio, S; Dmitruk, P

    2009-10-01

    Using solar wind data from the Advanced Composition Explorer spacecraft, with the support of Hall magnetohydrodynamic simulations, the waiting-time distributions of magnetic discontinuities have been analyzed. A possible phenomenon of clusterization of these discontinuities is studied in detail. We perform a local Poisson's analysis in order to establish if these intermittent events are randomly distributed or not. Possible implications about the nature of solar wind discontinuities are discussed.

  14. The Poisson Random Process. Applications of Probability Theory to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 340.

    ERIC Educational Resources Information Center

    Wilde, Carroll O.

    The Poisson probability distribution is seen to provide a mathematical model from which useful information can be obtained in practical applications. The distribution and some situations to which it applies are studied, and ways to find answers to practical questions are noted. The unit includes exercises and a model exam, and provides answers to…

  15. Waiting-time distributions of magnetic discontinuities: Clustering or Poisson process?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greco, A.; Matthaeus, W. H.; Servidio, S.

    2009-10-15

    Using solar wind data from the Advanced Composition Explorer spacecraft, with the support of Hall magnetohydrodynamic simulations, the waiting-time distributions of magnetic discontinuities have been analyzed. A possible phenomenon of clusterization of these discontinuities is studied in detail. We perform a local Poisson's analysis in order to establish if these intermittent events are randomly distributed or not. Possible implications about the nature of solar wind discontinuities are discussed.

  16. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    PubMed

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, existing literature only deals with the simplified channel model, which only considers the effects of Poisson noise introduced by SPAD, but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude-modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
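The GAT step can be illustrated in isolation. Assuming unit gain and zero-mean Gaussian noise of standard deviation σ, one common form is f(x) = 2√(x + 3/8 + σ²), which approximately stabilizes the noise variance to 1 regardless of the signal level; the parameters below are illustrative.

```python
import numpy as np

def gat(x, sigma):
    """Generalized Anscombe transform for Poisson + N(0, sigma^2) noise, unit gain."""
    return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0 + sigma**2, 0.0))

rng = np.random.default_rng(5)
sigma = 2.0
for lam in (10.0, 40.0, 160.0):
    x = rng.poisson(lam, size=200000) + rng.normal(0.0, sigma, size=200000)
    print(lam, gat(x, sigma).std())   # ~ 1 at every signal level
```

After this transform, the signal-dependent Poisson-Gaussian noise behaves approximately like unit-variance AWGN, which is what licenses the hard-decision detector in the paper.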

  17. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.

    PubMed

    Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M

    2012-03-01

    Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
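A minimal sketch of this formulation (not the SPIRAL-TAP algorithm itself): projected subgradient descent on the penalized negative Poisson log-likelihood with an l1 penalty and a nonnegativity constraint, on synthetic sparse data. The sensing matrix, penalty weight, and step size are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 60, 100
A = rng.uniform(0.0, 1.0, (m, n))            # known sensing/blur matrix
f_true = np.zeros(n)
f_true[rng.choice(n, 5, replace=False)] = rng.uniform(5.0, 10.0, 5)  # sparse f*
y = rng.poisson(A @ f_true)                  # Poisson count observations

tau = 0.1                                    # l1 penalty weight
step = 1e-3
eps = 1e-8                                   # keeps the log argument positive

def objective(f):
    Af = A @ f + eps
    return Af.sum() - (y * np.log(Af)).sum() + tau * np.abs(f).sum()

f = np.ones(n)
obj0 = objective(f)
for _ in range(3000):
    grad = A.T @ (1.0 - y / (A @ f + eps))   # gradient of the Poisson NLL
    f = np.maximum(f - step * (grad + tau), 0.0)  # l1 + nonnegativity step
print(obj0, objective(f))                    # the objective decreases
```

On the nonnegative orthant the l1 term is linear, so its subgradient is simply τ; SPIRAL additionally uses separable quadratic majorizers rather than a fixed small step.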

  18. NEW APPLICATIONS IN THE INVERSION OF ACOUSTIC FULL WAVEFORM LOGS - RELATING MODE EXCITATION TO LITHOLOGY.

    USGS Publications Warehouse

    Paillet, Frederick L.; Cheng, C.H.; Meredith, J.A.

    1987-01-01

    Existing techniques for the quantitative interpretation of waveform data have been based on one of two fundamental approaches: (1) simultaneous identification of compressional and shear velocities; and (2) least-squares minimization of the difference between experimental waveforms and synthetic seismograms. Techniques based on the first approach do not always work, and those based on the second seem too numerically cumbersome for routine application during data processing. An alternative approach is tested here, in which synthetic waveforms are used to predict relative mode excitation in the composite waveform. Synthetic waveforms are generated for a series of lithologies ranging from hard, crystalline rocks (Vp equals 6.0 km/sec and Poisson's ratio equals 0.20) to soft, argillaceous sediments (Vp equals 1.8 km/sec and Poisson's ratio equals 0.40). The series of waveforms illustrates a continuous change within this range of rock properties. Mode energy within characteristic velocity windows is computed for each of the modes in the set of synthetic waveforms. The results indicate that there is a consistent variation in mode excitation in lithology space that can be used to construct a unique relationship between relative mode excitation and lithology.

  19. A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.

    PubMed

    Zhao, Lei; Mi, Dong; Sun, Yeqing

    2017-05-07

    The multitarget version of the traditional target theory based on the Poisson distribution is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, because the usual ionizing radiation damage results from two sequential stochastic processes, the probability distribution of the damage number per cell should follow a compound Poisson distribution, such as Neyman's distribution of type A (N. A.). Since the Gaussian distribution approximates the N. A. at high flux, a multitarget model based on the Gaussian distribution is proposed to describe the cell inactivation effects of low linear energy transfer (LET) radiation at high dose rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the Linear-Quadratic (LQ) model in describing the biological effects of low-LET radiation at high dose rate, and the parameter ratio in the present model can be used as an alternative indicator of the radiation damage and radiosensitivity of the cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
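The compound-Poisson point can be checked by simulation: if the number of primary events (hits) per cell is Poisson and each hit independently produces a Poisson number of lesions, the resulting Neyman type A counts have Fano factor 1 + μ rather than 1. The parameters below are illustrative, not fitted to radiobiological data.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, mu = 20.0, 3.0               # mean hits per cell, mean lesions per hit
hits = rng.poisson(lam, size=200000)
lesions = rng.poisson(mu * hits)  # a sum of `hits` iid Poisson(mu) counts

fano = lesions.var() / lesions.mean()
print(lesions.mean(), fano)       # mean ~ lam*mu = 60, Fano ~ 1 + mu = 4
```

The overdispersion (Fano factor well above one) is exactly why a plain Poisson target model understates the spread of damage counts, motivating the compound-Poisson and Gaussian-approximation treatments.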

  20. A coarse-grid-projection acceleration method for finite-element incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne; FiN Lab Team

    2015-11-01

    Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged back to the original fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process; second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by a Galerkin spectral element method with piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of the coarse-grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for CGP-based versus non-CGP computations is presented. Laboratory for Fluid Dynamics in Nature.
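The restrict-solve-prolong pattern of CGP can be sketched on a 1-D Poisson problem, ignoring the Navier-Stokes context entirely: solve the pressure-like Poisson equation on a once-coarsened grid and prolong linearly, trading a modest accuracy loss for a much cheaper solve. Grid sizes and the right-hand side are invented for the demo.

```python
import numpy as np

def poisson_solve(f, h):
    """Direct solve of -u'' = f (zero Dirichlet BCs) on the interior points."""
    n = len(f)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, f)

n_f = 255                                    # interior fine-grid points
h_f = 1.0 / (n_f + 1)
x_f = np.linspace(h_f, 1.0 - h_f, n_f)
f = np.sin(2 * np.pi * x_f)
u_exact = f / (2 * np.pi) ** 2               # -u'' = f has u = f/(2 pi)^2 here

u_fine = poisson_solve(f, h_f)               # full fine-grid solve

# CGP-style: restrict the source by injection, solve coarse, prolong linearly.
f_c = f[1::2]                                # 127 coarse points at spacing 2h
u_c = poisson_solve(f_c, 2 * h_f)
xs = np.concatenate(([0.0], x_f[1::2], [1.0]))
us = np.concatenate(([0.0], u_c, [0.0]))     # include the boundary values
u_cgp = np.interp(x_f, xs, us)

err_fine = np.abs(u_fine - u_exact).max()
err_cgp = np.abs(u_cgp - u_exact).max()
print(err_fine, err_cgp)                     # both small; CGP trades accuracy for cost
```

The coarse system has half the unknowns (roughly an eightfold saving for a dense direct solve), while the error stays second order, which mirrors the cost/accuracy trade the abstract describes.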

  1. Solving the Fluid Pressure Poisson Equation Using Multigrid-Evaluation and Improvements.

    PubMed

    Dick, Christian; Rogowsky, Marcus; Westermann, Rudiger

    2016-11-01

    In many numerical simulations of fluids governed by the incompressible Navier-Stokes equations, the pressure Poisson equation needs to be solved to enforce mass conservation. Multigrid solvers show excellent convergence in simple scenarios, yet they can converge slowly in domains where physically separated regions are combined at coarser scales. Moreover, existing multigrid solvers are tailored to specific discretizations of the pressure Poisson equation, and they cannot easily be adapted to other discretizations. In this paper we analyze the convergence properties of existing multigrid solvers for the pressure Poisson equation in different simulation domains, and we show how to further improve the multigrid convergence rate by using a graph-based extension to determine the coarse grid hierarchy. The proposed multigrid solver is generic in that it can be applied to different kinds of discretizations of the pressure Poisson equation, using solely the specification of the simulation domain and pre-assembled computational stencils. We analyze the proposed solver in combination with finite difference and finite volume discretizations of the pressure Poisson equation. Our evaluations show that, contrary to the common assumption, multigrid schemes can exploit their potential even in the most complicated simulation scenarios, though this behavior is obtained at the price of higher memory consumption.
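
    The multigrid idea behind such solvers can be sketched in one dimension. The following is a minimal two-grid correction cycle for the model problem -u'' = f, not the paper's graph-based solver: smooth on the fine grid, restrict the residual, solve the coarse residual equation exactly, prolong the correction, and smooth again.

```python
import numpy as np

def smooth(u, f, h, iters, w=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f with zero Dirichlet BCs."""
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """Pre-smooth, solve the residual equation on a 2h grid, prolong, post-smooth."""
    u = smooth(u, f, h, 3)
    r = residual(u, f, h)
    # Full-weighting restriction to the coarse grid (every other point)
    rc = np.zeros((len(u) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # Exact solve of the coarse tridiagonal system
    nc, H = len(rc) - 2, 2 * h
    A = (np.diag(2.0 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
         - np.diag(np.ones(nc - 1), -1)) / (H * H)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # Linear-interpolation prolongation back to the fine grid
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return smooth(u, f, h, 3)

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)   # exact solution is u = sin(pi x)
u = np.zeros(n)
for _ in range(15):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))) < 1e-2)
```

    Recursing on the coarse solve instead of solving it directly turns this into the V-cycle that the paper's solvers generalize.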

  2. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
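
    The link between the proximal map of an l(p) norm and that of a Schatten norm of order p can be made concrete for the best-known case, p = 1 (the nuclear norm). The sketch below, an illustration rather than the paper's algorithm, applies the elementwise l1 prox (soft-thresholding) to the singular values:

```python
import numpy as np

def prox_l1(x, tau):
    """Proximal map of tau*||x||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_schatten1(X, tau):
    """Proximal map of tau*||X||_S1 (nuclear norm): the l1 prox applied
    to the singular values, per the l_p / Schatten-p link."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_l1(s, tau)) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
Y = prox_schatten1(X, tau=0.5)
# The singular values of the result are the soft-thresholded originals
s_in = np.linalg.svd(X, compute_uv=False)
s_out = np.linalg.svd(Y, compute_uv=False)
print(np.allclose(s_out, np.maximum(s_in - 0.5, 0.0), atol=1e-8))
```

    In an ADMM loop such a prox would be evaluated once per iteration on the regularization sub-problem.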

  3. Wavelets, ridgelets, and curvelets for Poisson noise removal.

    PubMed

    Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc

    2008-07-01

    In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance. This new transform, which can be deemed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme reconstructs properly the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
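
    The classical Anscombe VST mentioned above (of which the MS-VST is an extension) is simple enough to verify numerically. The sketch below, a generic check with arbitrary intensities rather than anything from the paper, confirms that 2*sqrt(x + 3/8) maps Poisson counts to values with roughly unit variance:

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: 2*sqrt(x + 3/8); for Poisson counts with mean above
    roughly 4, the output is approximately Gaussian with variance ~ 1."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(1)
stabilized = []
for lam in (5.0, 20.0, 100.0):
    counts = rng.poisson(lam, size=200000)
    stabilized.append(anscombe(counts).var())
print([round(v, 2) for v in stabilized])
```

    The abstract's point is that this approximation breaks down at very low counts, which is what the filtered-process VST is designed to fix.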

  4. Clinical characterization of 2D pressure field in human left ventricles

    NASA Astrophysics Data System (ADS)

    Borja, Maria; Rossini, Lorenzo; Martinez-Legazpi, Pablo; Benito, Yolanda; Alhama, Marta; Yotti, Raquel; Perez Del Villar, Candelas; Gonzalez-Mansilla, Ana; Barrio, Alicia; Fernandez-Aviles, Francisco; Bermejo, Javier; Khan, Andrew; Del Alamo, Juan Carlos

    2014-11-01

    The evaluation of left ventricle (LV) function in the clinical setting remains a challenge. The pressure gradient is a reliable and reproducible indicator of LV function. We obtain the 2D relative pressure field in the LV using in-vivo measurements obtained by processing Doppler-echocardiography images of healthy and dilated hearts. Exploiting mass conservation, we solve the Poisson pressure equation (PPE), dropping the time derivatives and viscous terms. The flow acceleration appears only in the boundary conditions, making our method weakly sensitive to the time resolution of in-vivo acquisitions. To ensure continuity with respect to the discrete operator and grid used, a potential flow correction is applied beforehand, which gives another Poisson equation. The new incompressible velocity field ensures that the compatibility equation for the PPE is satisfied. Both Poisson equations are efficiently solved on a Cartesian grid using a multigrid method and an immersed boundary for the LV wall. The whole process is computationally inexpensive and could play a diagnostic role in the clinical assessment of LV function.

  5. Beating the odds: The poisson distribution of all input cells during limiting dilution grossly underestimates whether a cell line is clonally-derived or not.

    PubMed

    Zhou, Yizhou; Shaw, David; Lam, Cynthia; Tsukuda, Joni; Yim, Mandy; Tang, Danming; Louie, Salina; Laird, Michael W; Snedecor, Brad; Misaghi, Shahram

    2017-09-23

    Establishing that a cell line was derived from a single cell progenitor and defined as clonally-derived for the production of clinical and commercial therapeutic protein drugs has been the subject of increased emphasis in cell line development (CLD). Several regulatory agencies have expressed that the prospective probability of clonality for CHO cell lines is assumed to follow the Poisson distribution based on the input cell count. The probability of obtaining monoclonal progenitors based on the Poisson distribution of all cells suggests that one round of limiting dilution may not be sufficient to assure the resulting cell lines are clonally-derived. We experimentally analyzed clonal derivatives originating from single cell cloning (SCC) via one round of limiting dilution, following our standard legacy cell line development practice. Two cell populations with stably integrated DNA spacers were mixed and subjected to SCC via limiting dilution. Cells were cultured in the presence of selection agent, screened, and ranked based on product titer. Post-SCC, the growing cell lines were screened by PCR analysis for the presence of identifying spacers. We observed that the percentage of nonclonal populations was below 9%, which is considerably lower than the determined probability based on the Poisson distribution of all cells. These results were further confirmed using fluorescence imaging of clonal derivatives originating from SCC via limiting dilution of mixed cell populations expressing GFP or RFP. Our results demonstrate that in the presence of selection agent, the Poisson distribution of all cells clearly underestimates the probability of obtaining clonally-derived cell lines. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 2017. © 2017 American Institute of Chemical Engineers.
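
    The Poisson calculation that the abstract argues is overly pessimistic can be stated in a few lines. This is the textbook all-input-cells computation (illustrative λ values, not the paper's data): with a Poisson number of cells per well, the probability that a well showing outgrowth was seeded by exactly one cell is P(1)/P(≥1).

```python
import math

def p_clonal_given_outgrowth(lam):
    """Under Poisson seeding with mean lam cells per well, the probability
    that a well with outgrowth came from exactly one cell:
    lam*exp(-lam) / (1 - exp(-lam))."""
    return lam * math.exp(-lam) / (1.0 - math.exp(-lam))

for lam in (0.25, 0.5, 1.0):
    print(lam, round(p_clonal_given_outgrowth(lam), 3))
```

    At one cell per well this gives only about a 58% chance of clonality; the experiments above found far fewer than the implied ~42% nonclonal lines, because selection removes many of the co-seeded cells from the calculation.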

  6. Eruption patterns of the chilean volcanoes Villarrica, Llaima, and Tupungatito

    NASA Astrophysics Data System (ADS)

    Muñoz, Miguel

    1983-09-01

    The historical eruption records of three Chilean volcanoes have been subjected to many statistical tests, and none have been found to differ significantly from random, or Poissonian, behaviour. The statistical analysis shows rough conformity with the descriptions determined from the eruption rate functions. It is possible that a constant eruption rate describes the activity of Villarrica; Llaima and Tupungatito present complex eruption rate patterns that appear, however, to have no statistical significance. Questions related to loading and extinction processes and to the existence of shallow secondary magma chambers to which magma is supplied from a deeper system are also addressed. The analysis and the computation of the serial correlation coefficients indicate that the three series may be regarded as stationary renewal processes. None of the test statistics indicates rejection of the Poisson hypothesis at a level less than 5%, but the coefficient of variation for the eruption series at Llaima is significantly different from the value expected for a Poisson process. Also, the estimates of the normalized spectrum of the counting process for the three series suggest a departure from the random model, but the deviations are not found to be significant at the 5% level. Kolmogorov-Smirnov and chi-squared test statistics, applied directly to ascertaining to which probability P the random Poisson model fits the data, indicate that there is significant agreement in the case of Villarrica ( P=0.59) and Tupungatito ( P=0.3). Even though the P-value for Llaima is a marginally significant 0.1 (which is equivalent to rejecting the Poisson model at the 90% confidence level), the series suggests that nonrandom features are possibly present in the eruptive activity of this volcano.
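
    The coefficient-of-variation diagnostic used above for the Llaima series can be sketched generically. For a homogeneous Poisson process the repose intervals are exponential with CV = 1, while a more regular renewal process (here Erlang-3, a hypothetical alternative, not a fitted model of any volcano) has CV < 1:

```python
import math
import random

def coeff_variation(xs):
    """Sample coefficient of variation: std / mean."""
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return s / m

rng = random.Random(2)
# Homogeneous Poisson process: exponential repose intervals, CV = 1
poisson_gaps = [rng.expovariate(1.0) for _ in range(5000)]
# A more regular renewal process: Erlang-3 intervals, CV = 1/sqrt(3) ~ 0.58
erlang_gaps = [sum(rng.expovariate(3.0) for _ in range(3)) for _ in range(5000)]
print(round(coeff_variation(poisson_gaps), 2), round(coeff_variation(erlang_gaps), 2))
```

    A measured CV significantly different from 1, as reported for Llaima, is evidence against the simple Poisson model even when other tests fail to reject it.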

  7. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

    In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model this empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of the fits is evaluated by suitable statistical tests, i.e., comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and implementation in the artificial stock market can reproduce the trading activity in a realistic way.
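
    The mixture-of-exponentials waiting-time model can be sketched as follows. The mixture weights and rates here are arbitrary illustrations, not the paper's estimates; the point is that such a hyperexponential renewal process is overdispersed relative to a Poisson process (CV of the waiting times exceeds 1):

```python
import math
import random

rng = random.Random(3)

def hyperexp(p_fast, lam_fast, lam_slow):
    """Mixture of two exponentials: with probability p_fast draw from the
    fast rate, otherwise from the slow rate."""
    lam = lam_fast if rng.random() < p_fast else lam_slow
    return rng.expovariate(lam)

waits = [hyperexp(0.8, 2.0, 0.2) for _ in range(20000)]
m = sum(waits) / len(waits)
v = sum((w - m) ** 2 for w in waits) / len(waits)
cv = math.sqrt(v) / m
# A pure Poisson process would give CV = 1; the mixture is overdispersed
print(cv > 1.0)
```

    Fitting the weights and rates to empirical inter-order times is what turns this sketch into the generalized-Poisson model the paper describes.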

  8. Occurrence analysis of daily rainfalls through non-homogeneous Poissonian processes

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Ferrari, E.; de Luca, D. L.

    2011-06-01

    A stochastic model based on a non-homogeneous Poisson process, characterised by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects of daily rainfalls exceeding prefixed threshold values. The data modelling has been performed with a partition of observed daily rainfall data into a calibration period for parameter estimation and a validation period for checking on occurrence process changes. The model has been applied to a set of rain gauges located in different geographical areas of Southern Italy. The results show a good fit for time-varying intensity of rainfall occurrence process by 2-harmonic Fourier law and no statistically significant evidence of changes in the validation period for different threshold values.
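
    A non-homogeneous Poisson process with a harmonic intensity like the one fitted above can be simulated by thinning. The intensity below is a hypothetical 2-harmonic seasonal law with arbitrary coefficients, not the fitted Southern Italy parameters:

```python
import math
import random

def simulate_nhpp(rate, rate_max, t_end, rng):
    """Lewis-Shedler thinning: propose events from a homogeneous process at
    rate_max and accept each with probability rate(t)/rate_max."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > t_end:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)

def lam(t):
    """Hypothetical 2-harmonic Fourier intensity (events/day), period 365 d."""
    return (0.2 + 0.1 * math.cos(2 * math.pi * t / 365.0)
                + 0.05 * math.cos(4 * math.pi * t / 365.0))

rng = random.Random(4)
events = simulate_nhpp(lam, rate_max=0.35, t_end=10 * 365.0, rng=rng)
# The harmonics integrate to zero over whole years, so the expected
# count over 10 years is 0.2 * 3650 = 730
print(len(events))
```

    The same construction, run with parameters estimated on the calibration period, would be the natural way to check the model against the validation period.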

  9. Filling of a Poisson trap by a population of random intermittent searchers.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2012-03-01

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant-velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver their cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N→∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫_0^t λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite N. We thus determine how the mean first passage time (MFPT) for filling the target depends on N and n.

  10. A cascade model of information processing and encoding for retinal prosthesis.

    PubMed

    Pei, Zhi-Jun; Gao, Guan-Xin; Hao, Bo; Qiao, Qing-Li; Ai, Hui-Jian

    2016-04-01

    Retinal prosthesis offers a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the biological retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Some retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. However, most of these focus on stimulus image compression, edge detection, and reconstruction, and do not generate spike trains corresponding to the visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems, and neuronal Poisson coding, a cascade model of the retina comprising the outer plexiform layer for information processing and the inner plexiform layer for information encoding is proposed, which integrates both the anatomic connections and the functional computations of the retina. Using MATLAB software, spike trains corresponding to the stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling, and Poisson spike generation. The simulated results suggest that such a cascade model can recreate the visual information processing and encoding functionalities of the retina, which is helpful in developing an artificial retina for the retinally blind.
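
    The filter-rectify-encode cascade can be sketched in a temporal toy version. All parameters below (stimulus frequency, filter time constant, gain) are arbitrary stand-ins, and the spatial stages are omitted; the sketch only shows the linear filtering, static rectification, and Poisson spike generation pipeline:

```python
import numpy as np

rng = np.random.default_rng(5)

dt = 0.001                             # 1 ms time bins
t = np.arange(0.0, 2.0, dt)
stimulus = np.sin(2 * np.pi * 2 * t)   # hypothetical 2 Hz input signal

# Linear stage: causal exponential low-pass filter (20 ms time constant)
kernel = np.exp(-np.arange(0.0, 0.1, dt) / 0.02)
kernel /= kernel.sum()
drive = np.convolve(stimulus, kernel)[: len(t)]

# Static nonlinearity: half-wave rectification scaled to a firing rate (Hz)
rate = 80.0 * np.maximum(drive, 0.0)

# Poisson encoding: one Bernoulli draw per bin with probability rate*dt
spikes = rng.random(len(t)) < rate * dt
print(int(spikes.sum()))
```

    In the full model the linear stage is a spatiotemporal filter over the image and the spikes are emitted per radially sampled ganglion-cell location.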

  11. Joseph eleve des poissons au Gabon. Guide pour l'elevage des poissons (Joseph Raises Fish in Gabon. Guide for Raising Fish).

    ERIC Educational Resources Information Center

    Needham, Scott

    A guide, in French, to raising fish for food and profit is designed to instruct and encourage Gabonese natives to establish family fisheries. It describes and illustrates in story form the process used to plan the fishery, clear the land, seek help from an agricultural agent, create a dam, make compost, plan and build the pond, feed the fish,…

  12. An investigation of stress wave propagation in a shear deformable nanobeam based on modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Akbarzadeh Khorshidi, Majid; Shariati, Mahmoud

    2016-04-01

    This paper presents a new investigation for propagation of stress wave in a nanobeam based on modified couple stress theory. Using Euler-Bernoulli beam theory, Timoshenko beam theory, and Reddy beam theory, the effect of shear deformation is investigated. This nonclassical model contains a material length scale parameter to capture the size effect and the Poisson effect is incorporated in the current model. Governing equations of motion are obtained by Hamilton's principle and solved explicitly. This solution leads to obtain two phase velocities for shear deformable beams in different directions. Effects of shear deformation, material length scale parameter, and Poisson's ratio on the behavior of these phase velocities are investigated and discussed. The results also show a dual behavior for phase velocities against Poisson's ratio.

  13. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that in a book of prices the changes happen at the jump points of a Poisson process with a random intensity, i.e., the moments of change follow a point process of the Cox type. We calculate the cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept hypotheses of linear growth for the estimates of both the cumulative average and the cumulative variance, for both the input and output prices recorded in the book of prices.

  14. Marginal Stability of Ion-Acoustic Waves in a Weakly Collisional Two-Temperature Plasma without a Current.

    DTIC Science & Technology

    1987-08-06

    The linearized Balescu-Lenard-Poisson equations are solved in the weakly... free plasma is unresolved. The purpose of this report is to present a resolution based upon the Balescu-Lenard-Poisson equations. The Balescu-Lenard... acoustic waves become marginally stable. Our results are based on the closed-form solution for the dielectric function for the linearized Balescu-Lenard

  15. A special case of the Poisson PDE formulated for Earth's surface and its capability to approximate the terrain mass density employing land-based gravity data, a case study in the south of Iran

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Yahya; Safari, Abdolreza; Vaníček, Petr

    2016-12-01

    This paper resurrects a version of Poisson's Partial Differential Equation (PDE) associated with the gravitational field at the Earth's surface and illustrates how the PDE possesses the capability to extract the mass density of Earth's topography from land-based gravity data. Herein, we first propound a theorem which mathematically introduces this version of Poisson's PDE adapted for the Earth's surface, and then we use this PDE to develop a method of approximating the terrain mass density. We also carry out a real case study showing how the proposed approach can be applied to a set of land-based gravity data. In the case study, the method is summarized by an algorithm and applied to a set of gravity stations located along a part of the north coast of the Persian Gulf in the south of Iran. The results were numerically validated via rock samplings as well as a geological map. The method was also compared with two conventional methods of mass density reduction. The numerical experiments indicate that the Poisson PDE at the Earth's surface has the capability to extract the mass density from land-based gravity data and can provide an alternative and somewhat more precise method of estimating the terrain mass density.

  16. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

    PubMed

    Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

    2016-10-01

    Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, and correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite for developing efficient reconstruction and processing methods and for reducing noise. The deviation from Poisson statistics in PET data can be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for λ and ν parameter estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters can detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low-count emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
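
    The CMP distribution itself is easy to evaluate numerically. The sketch below (a generic illustration with arbitrary λ, not the paper's estimation method) computes the pmf by normalizing the unnormalized weights λ^k / (k!)^ν, and checks that ν = 1 recovers the Poisson variance-equals-mean property while ν < 1 over-disperses:

```python
import math

def cmp_pmf(k, lam, nu, kmax=200):
    """Conway-Maxwell-Poisson pmf: P(k) proportional to lam^k / (k!)^nu,
    normalized by truncating the series at kmax (in log space for stability)."""
    logw = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(kmax)]
    mx = max(logw)
    z = sum(math.exp(w - mx) for w in logw)
    return math.exp(logw[k] - mx) / z

def mean_var(lam, nu, kmax=200):
    p = [cmp_pmf(k, lam, nu, kmax) for k in range(kmax)]
    m = sum(k * pk for k, pk in enumerate(p))
    v = sum((k - m) ** 2 * pk for k, pk in enumerate(p))
    return m, v

m1, v1 = mean_var(4.0, 1.0)   # nu = 1 recovers Poisson: variance = mean
m2, v2 = mean_var(4.0, 0.5)   # nu < 1: over-dispersed, variance > mean
print(round(v1 / m1, 2), v2 / m2 > 1.0)
```

    Fitting λ and ν to sinogram counts and inspecting ν is, in outline, how the dispersion diagnosis described above works.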

  17. Information transfer with rate-modulated Poisson processes: a simple model for nonstationary stochastic resonance.

    PubMed

    Goychuk, I

    2001-08-01

    Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to the arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.

  18. Toward negative Poisson's ratio composites: Investigation of the auxetic behavior of fibrous networks

    NASA Astrophysics Data System (ADS)

    Tatlier, Mehmet Seha

    Random fibrous networks can be found among natural and synthetic materials. Some of these random fibrous networks possess a negative Poisson's ratio and are commonly called auxetic materials. The governing mechanisms behind this counterintuitive property in random networks are yet to be understood, and this kind of auxetic material remains widely under-explored. Moreover, most synthetic auxetic materials suffer from low strength. This shortcoming can be rectified by developing high-strength auxetic composites. Embedding auxetic random fibrous networks in a polymer matrix is an attractive alternative route to the manufacture of auxetic composites; however, before such an approach can be developed, a methodology for designing fibrous networks with the desired negative Poisson's ratios must first be established. This requires an understanding of the factors which bring about negative Poisson's ratios in these materials. In this study, a numerical model is presented in order to investigate the auxetic behavior of compressed random fiber networks. Finite element analyses of three-dimensional stochastic fiber networks were performed to gain insight into the effects of parameters such as network anisotropy, network density, and degree of network compression on the out-of-plane Poisson's ratio and Young's modulus. The simulation results suggest that compression is the critical parameter that gives rise to a negative Poisson's ratio, while anisotropy significantly promotes the auxetic behavior. This model can be utilized to design fibrous auxetic materials and to evaluate the feasibility of developing auxetic composites by using auxetic fibrous networks as the reinforcing layer.

  19. Poisson Noise Removal in Spherical Multichannel Images: Application to Fermi data

    NASA Astrophysics Data System (ADS)

    Schmitt, Jérémy; Starck, Jean-Luc; Fadili, Jalal; Digel, Seth

    2012-03-01

    The Fermi Gamma-ray Space Telescope, which was launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky [5]. Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range between 20 MeV and >300 GeV. The LAT is much more sensitive than its predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma-ray Observatory, and is expected to find several thousand gamma-ray point sources, an order of magnitude more than EGRET [13]. Even with its relatively large acceptance (∼2 m² sr), the number of photons detected by the LAT outside the Galactic plane and away from intense sources is relatively low, and the sky overall has a diffuse glow from cosmic-ray interactions with interstellar gas and low-energy photons that forms a background against which point sources need to be detected. In addition, the per-photon angular resolution of the LAT is relatively poor and strongly energy dependent, ranging from >10° at 20 MeV to ∼0.1° above 100 GeV. Consequently, the spherical photon-count images obtained by Fermi are degraded by fluctuations in the number of detected photons. This kind of noise is strongly signal-dependent: on the brightest parts of the image, such as the Galactic plane or the brightest sources, there are many photons per pixel, and so the photon noise is low; outside the Galactic plane, the number of photons per pixel is low, which means that the photon noise is high. Such signal-dependent noise cannot be accurately modeled by a Gaussian distribution. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed. More specifically, the image is considered as a realization of an inhomogeneous Poisson process.
This statistical noise makes source detection more difficult; consequently, it is highly desirable to have an efficient denoising method for spherical Poisson data. Several techniques have been proposed in the literature to estimate Poisson intensity in two dimensions (2D). A major class of methods adopts a multiscale Bayesian framework specifically tailored for Poisson data [18], independently initiated by Timmerman and Nowak [23] and Kolaczyk [14]. Lefkimmiatis et al. [15] proposed an improved Bayesian framework for analyzing Poisson processes, based on a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities in adjacent scales are modeled as mixtures of conjugate parametric distributions. Another approach preprocesses the count data with a variance stabilizing transform (VST), such as the Anscombe [4] and the Fisz [10] transforms, applied respectively in the spatial [8] or in the wavelet domain [11]. The transform reforms the data so that the noise approximately becomes Gaussian with a constant variance. Standard techniques for independent identically distributed Gaussian noise are then used for denoising. Zhang et al. [25] proposed a powerful method called the multiscale VST (MS-VST). It consists in combining a VST with a multiscale transform (wavelets, ridgelets, or curvelets), yielding asymptotically normally distributed coefficients with known variances. The interest of using a multiscale method is to exploit the sparsity properties of the data: the data are transformed into a domain in which they are sparse, and, as the noise is not sparse in any transform domain, it is easy to separate it from the signal. When the noise is Gaussian of known variance, it is easy to remove it by high thresholding in the wavelet domain. The choice of the multiscale transform depends on the morphology of the data.
Wavelets represent regular structures and isotropic singularities more efficiently, whereas ridgelets are designed to represent global lines in an image, and curvelets represent curvilinear contours efficiently. Significant coefficients are then detected with binary hypothesis testing, and the final estimate is reconstructed with an iterative scheme. In Ref

  20. Prediction of accrual closure date in multi-center clinical trials with discrete-time Poisson process models.

    PubMed

    Tang, Gong; Kong, Yuan; Chang, Chung-Chou Ho; Kong, Lan; Costantino, Joseph P

    2012-01-01

    In a phase III multi-center cancer clinical trial or a large public health study, the sample size is predetermined to achieve the desired power, and study participants are enrolled from tens or hundreds of participating institutions. As the accrual approaches the target size, the coordinating data center needs to project the accrual closure date on the basis of the observed accrual pattern and notify the participating sites several weeks in advance. In the past, projections were simply based on crude assessment, and conservative measures were incorporated in order to achieve the target accrual size. This approach often resulted in excessive accrual size and subsequently unnecessary financial burden on the study sponsors. Here we propose a discrete-time Poisson process-based method to estimate the accrual rate at the time of projection and subsequently the trial closure date. To ensure that the target size will be reached with high confidence, we also propose a conservative method for the closure date projection. The proposed method is illustrated through the analysis of the accrual data of the National Surgical Adjuvant Breast and Bowel Project trial B-38. The results showed that application of the proposed method could help to save a considerable amount of expenditure in patient management without compromising the accrual goal in multi-center clinical trials. Copyright © 2012 John Wiley & Sons, Ltd.
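
    The core of such a projection can be sketched in a few lines. The numbers below are hypothetical, not trial B-38 data, and the normal-approximation lower confidence bound on the rate is one simple conservative choice, not necessarily the paper's: treat recent daily enrollments as Poisson counts, take the sample mean as the rate estimate, and divide the remaining accrual by it.

```python
import math

def project_closure(enrolled_by_day, target, window=56):
    """Estimate the accrual rate from the last `window` days (daily counts
    treated as i.i.d. Poisson) and project the days until target accrual."""
    recent = enrolled_by_day[-window:]
    rate = sum(recent) / len(recent)          # MLE of the Poisson mean
    remaining = target - sum(enrolled_by_day)
    days_left = math.ceil(remaining / rate)
    # Conservative variant: project with a 95% lower bound on the rate
    lcb = rate - 1.645 * math.sqrt(rate / len(recent))
    days_left_conservative = math.ceil(remaining / lcb)
    return days_left, days_left_conservative

# Hypothetical accrual: 300 days at exactly 2 enrollments/day, target 700
history = [2] * 300
plain, conservative = project_closure(history, target=700)
print(plain, conservative)  # → 50 60
```

    The conservative projection deliberately overstates the days remaining, so notifying sites on that schedule keeps the probability of undershooting the target small.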

  1. Limitations of Poisson statistics in describing radioactive decay.

    PubMed

    Sitek, Arkadiusz; Celler, Anna M

    2015-12-01

    The assumption that nuclear decays are governed by Poisson statistics is an approximation. This approximation becomes unjustified when data acquisition times longer than, or even comparable with, the half-life of the radioisotope in the sample are considered. In this work, the limits of the Poisson-statistics approximation are investigated. The formalism for the statistics of radioactive decay based on the binomial distribution is derived. The theoretical factor describing the deviation of the variance of the number of decays predicted by the Poisson distribution from the true variance is defined and investigated for several commonly used radiotracers such as (18)F, (15)O, (82)Rb, (13)N, (99m)Tc, (123)I, and (201)Tl. The variance of the number of decays estimated using the Poisson distribution is significantly different from the true variance for a 5-minute observation time of (11)C, (15)O, (13)N, and (82)Rb. Durations of nuclear medicine studies are often relatively long; they may be even a few times longer than the half-lives of some short-lived radiotracers. Our study shows that in such situations Poisson statistics are unsuitable and should not be applied to describe the statistics of the number of decays in radioactive samples. However, the above statement does not directly apply to counting statistics at the level of event detection. The low sensitivities of the detectors used in imaging studies make the Poisson approximation nearly perfect. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
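
The binomial-versus-Poisson discrepancy described here is easy to reproduce. In the sketch below (an illustration, not the authors' formalism), each of n0 nuclei decays within the acquisition time t with probability p = 1 − 2^(−t/T½), so the decay count is binomial with variance n0·p·(1−p), while the Poisson approximation uses variance n0·p:

```python
def decay_variances(n0, t, half_life):
    """Mean, exact (binomial) variance, and Poisson-approximate variance
    of the number of decays observed in time t from n0 nuclei."""
    p = 1.0 - 2.0 ** (-t / half_life)      # per-nucleus decay probability
    mean = n0 * p
    # (mean, exact binomial variance, Poisson-approximate variance)
    return mean, n0 * p * (1.0 - p), mean

# A 5-minute acquisition of O-15 (half-life ~122 s): most nuclei decay,
# and the Poisson variance overestimates the true variance ~5.5-fold.
mean, var_exact, var_poisson = decay_variances(1e6, t=300.0, half_life=122.0)
```

The factor separating the two variances is 1/(1−p), which approaches 1 only when t is much shorter than the half-life — exactly the regime the abstract describes.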

  2. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.

    PubMed

    Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng

    2016-01-01

    In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.

  3. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method

    PubMed Central

    Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng

    2016-01-01

    In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC. PMID:28005929

  4. De Rham-Hodge decomposition and vanishing of harmonic forms by derivation operators on the Poisson space

    NASA Astrophysics Data System (ADS)

    Privault, Nicolas

    2016-05-01

    We construct differential forms of all orders and a covariant derivative together with its adjoint on the probability space of a standard Poisson process, using derivation operators. In this framework we derive a de Rham-Hodge-Kodaira decomposition as well as Weitzenböck and Clark-Ocone formulas for random differential forms. As in the Wiener space setting, this construction provides two distinct approaches to the vanishing of harmonic differential forms.

  5. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
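The record above concerns the five-point finite-difference method for Poisson's equation; for context, a minimal Jacobi-iteration sketch of that stencil (not the paper's three-grid error-control algorithm) looks like:

```python
import math

def solve_poisson_5pt(f, n, iters=500):
    """Solve u_xx + u_yy = f on the unit square with u = 0 on the
    boundary, using the five-point stencil and Jacobi iteration on an
    (n+2) x (n+2) grid of spacing h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                # Five-point stencil: (u_W + u_E + u_S + u_N - 4u)/h^2 = f
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                    u[i][j - 1] + u[i][j + 1] -
                                    h * h * f(i * h, j * h))
        u = new
    return u, h

# Manufactured solution u = sin(pi x) sin(pi y), so f = -2 pi^2 u:
f = lambda x, y: -2.0 * math.pi ** 2 * math.sin(math.pi * x) * math.sin(math.pi * y)
u, h = solve_poisson_5pt(f, n=10)
```

The computed values agree with the exact solution to O(h²), which is why error-control strategies such as the one in this record compare grids of different resolution.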

  6. The Poisson model limits in NBA basketball: Complexity in team sports

    NASA Astrophysics Data System (ADS)

    Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa

    2016-12-01

    Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process that can be described by the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a main role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. Most numbers of baskets (n) over a time interval (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution covers most baskets in any game, in most game situations, but in close games in the last minute the numbers of events are distributed following a power law. The number of events can be adjusted by a mixture of two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose to use the Poisson model as a reference. The complex dynamics will emerge from the limits of this model.
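
The Poisson baseline in this record is easy to quantify: under a Poisson model, more than 3 baskets in a 10 s window is a sub-1% event, so any appreciable number of such windows signals non-Poisson dynamics. A small sketch (the one-basket-per-~14 s combined rate is an assumed, illustrative figure, not taken from the paper):

```python
import math

def poisson_pmf(n, mu):
    """P(N = n) for N ~ Poisson(mu)."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

rate = 1.0 / 14.0          # assumed combined scoring rate: one basket per ~14 s
mu = rate * 10.0           # expected baskets in a 10 s window
p_tail = 1.0 - sum(poisson_pmf(n, mu) for n in range(4))   # P(n > 3)
```

Here `p_tail` comes out near 0.006, which is why 10 s windows with n > 3 falling on a power law is strong evidence of a regime change in close-game endings.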

  7. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals

    NASA Astrophysics Data System (ADS)

    Frejlich, Pedro; Mărcuț, Ioan

    2018-03-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  8. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals.

    PubMed

    Frejlich, Pedro; Mărcuț, Ioan

    2018-01-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  9. Tensorial Basis Spline Collocation Method for Poisson's Equation

    NASA Astrophysics Data System (ADS)

    Plagne, Laurent; Berthou, Jean-Yves

    2000-01-01

    This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h⁴) and O(h⁶) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed-memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: as an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256³ non-uniform 3D Cartesian mesh using 128 T3E-750 processors. This represents 215 Mflops per processor.

  10. Beyond single-stream with the Schrödinger method

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Kopp, Michael

    2016-10-01

    We investigate large-scale structure formation of collisionless dark matter in the phase space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow and Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.

  11. FluBreaks: early epidemic detection from Google flu trends.

    PubMed

    Pervaiz, Fahad; Pervaiz, Mansoor; Abdur Rehman, Nabeel; Saif, Umar

    2012-10-04

    The Google Flu Trends service was launched in 2008 to track changes in the volume of online search queries related to flu-like symptoms. Over the last few years, the trend data produced by this service has shown a consistent relationship with the actual number of flu reports collected by the US Centers for Disease Control and Prevention (CDC), often identifying increases in flu cases weeks in advance of CDC records. However, contrary to popular belief, Google Flu Trends is not an early epidemic detection system. Instead, it is designed as a baseline indicator of the trend, or changes, in the number of disease cases. Our objective was to evaluate whether these trends can be used as a basis for an early warning system for epidemics. We present the first detailed algorithmic analysis of how Google Flu Trends can be used as a basis for building a fully automated system for early warning of epidemics in advance of the methods used by the CDC. Based on our work, we present a novel early epidemic detection system, called FluBreaks (dritte.org/flubreaks), based on Google Flu Trends data. We compared the accuracy and practicality of three types of algorithms: normal distribution algorithms, Poisson distribution algorithms, and negative binomial distribution algorithms. We explored the relative merits of these methods, and related our findings to changes in Internet penetration and population size for the regions in Google Flu Trends providing data. Across our performance metrics of percentage true-positives (RTP), percentage false-positives (RFP), percentage overlap (OT), and percentage early alarms (EA), Poisson- and negative binomial-based algorithms performed better in all except RFP. Poisson-based algorithms had average values of 99%, 28%, 71%, and 76% for RTP, RFP, OT, and EA, respectively, whereas negative binomial-based algorithms had average values of 97.8%, 17.8%, 60%, and 55% for RTP, RFP, OT, and EA, respectively. Moreover, the EA was also affected by the region's population size. Regions with larger populations (regions 4 and 6) had higher values of EA than region 10 (which had the smallest population) for negative binomial- and Poisson-based algorithms. The difference was 12.5% and 13.5% on average in negative binomial- and Poisson-based algorithms, respectively. We present the first detailed comparative analysis of popular early epidemic detection algorithms on Google Flu Trends data. We note that realizing this opportunity requires moving beyond the cumulative sum and historical limits method-based normal distribution approaches traditionally employed by the CDC, to negative binomial- and Poisson-based algorithms, to deal with potentially noisy search query data from regions with varying populations and Internet penetration. Based on our work, we have developed FluBreaks, an early warning system for flu epidemics using Google Flu Trends.
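
A minimal version of a Poisson-based detector of the kind compared in this record (a sketch, not FluBreaks' actual scoring): flag a weekly count as anomalous when its upper-tail probability under a Poisson baseline falls below a threshold.

```python
import math

def poisson_tail(k, mu):
    """P(N >= k) for N ~ Poisson(mu), by summing the pmf for 0..k-1."""
    term, cdf = math.exp(-mu), 0.0
    for n in range(k):
        cdf += term
        term *= mu / (n + 1)   # P(n+1) = P(n) * mu / (n+1)
    return 1.0 - cdf

def poisson_alarm(baseline_mean, observed, alpha=0.01):
    """Raise an early-warning alarm if the observed count is improbably
    large under Poisson(baseline_mean).  A sketch; a real system would
    also smooth the baseline and control false alarms across regions."""
    return poisson_tail(observed, baseline_mean) < alpha
```

With a baseline of 50 cases per week, an observation of 85 trips the alarm while 55 does not.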

  12. A Fock space representation for the quantum Lorentz gas

    NASA Astrophysics Data System (ADS)

    Maassen, H.; Tip, A.

    1995-02-01

    A Fock space representation is given for the quantum Lorentz gas, i.e., for random Schrödinger operators of the form H(ω) = p² + V_ω = p² + Σ_j φ(x - x_j(ω)), acting in H = L²(R^d), with Poisson-distributed x_j's. An operator H is defined in K = H⊗P = H⊗L²(Ω, P(dω)) = L²(Ω, P(dω); H) by the action of H(ω) on its fibers in a direct integral decomposition. The stationarity of the Poisson process allows a unitarily equivalent description in terms of a new family {H(k) | k ∈ R^d}, where each H(k) acts in P [A. Tip, J. Math. Phys. 35, 113 (1994)]. The space P is then unitarily mapped onto the symmetric Fock space over L²(R^d, ρ dx), with ρ the intensity of the Poisson process (the average number of points x_j per unit volume; the scatterer density), and the equivalent of H(k) is determined. Averages now become vacuum expectation values, and a further unitary transformation (removing ρ in ρ dx) is made which leaves the former invariant. The resulting operator H_F(k) has an interesting structure: on the nth Fock layer we encounter a single particle moving in the field of n scatterers, and the randomness now appears in the coefficient √ρ of a coupling term connecting neighboring Fock layers. We also give a simple direct self-adjointness proof for H_F(k), based upon Nelson's commutator theorem. Restriction to a finite number of layers (a kind of low scatterer density approximation) still gives nontrivial results, as is demonstrated by considering an example.

  13. Human dynamics scaling characteristics for aerial inbound logistics operation

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Guo, Jin-Li

    2010-05-01

    In recent years, the study of the power-law scaling characteristics of real-life networks has attracted much interest from scholars; such behavior deviates from the Poisson process. In this paper, we take the whole process of aerial inbound operation in a logistics company as the empirical object. The main aim of this work is to study the statistical scaling characteristics of task-restricted work patterns. We found that the statistical variables have the scaling characteristics of a unimodal distribution with a power-law tail in five statistical distributions - that is to say, there obviously exists a peak in each distribution, the shape of the left part is close to a Poisson distribution, and the right part has heavy-tailed scaling statistics. Furthermore, to our surprise, there is only one distribution whose right part can be approximated by the power-law form with exponent α = 1.50. The others are bigger than 1.50 (three of four are about 2.50, one of four is about 3.00). We then draw two inferences from these empirical results: first, human behaviors are probably close to both Poisson statistics and power-law distributions on certain levels, and human-computer interaction behaviors may be the most common in logistics operational areas, even in the whole task-restricted work pattern area. Second, the hypothesis in Vázquez et al. (2006) [A. Vázquez, J. G. Oliveira, Z. Dezsö, K.-I. Goh, I. Kondor, A.-L. Barabási. Modeling burst and heavy tails in human dynamics, Phys. Rev. E 73 (2006) 036127] is probably not sufficient; it claimed that human dynamics can be classified into two discrete universality classes. There may be a new human dynamics mechanism that is different from the classical Barabási models.

  14. Charged Substrate and Product Together Contribute Like a Nonreactive Species to the Overall Electrostatic Steering in Diffusion-Reaction Processes.

    PubMed

    Xu, Jingjie; Xie, Yan; Lu, Benzhuo; Zhang, Linbo

    2016-08-25

    The Debye-Hückel limiting law is used to study the binding kinetics of a substrate-enzyme system as well as to estimate the reaction rate of an electrostatically steered diffusion-controlled reaction process. It is based on a linearized Poisson-Boltzmann model and known for its accurate predictions in dilute solutions. However, the substrate and product particles are in nonequilibrium states and are possibly charged, and their contributions to the total electrostatic field cannot be explicitly studied in the Poisson-Boltzmann model. Hence the influences of substrate and product on the reaction rate coefficient were not known. In this work, we consider all the charged species, including the charged substrate, product, and mobile salt ions, in a Poisson-Nernst-Planck model, and then compare the results with previous work. The results indicate that both the charged substrate and the charged product can significantly influence the reaction rate coefficient, with different behaviors under different setups of computational conditions. It is interesting to find that when substrate and product are both considered, under an overall neutral boundary condition for all the bulk charged species, the computed reaction rate kinetics recovers a similar Debye-Hückel limiting law again. This phenomenon implies that the charged product counteracts the influence of the charged substrate on the reaction rate coefficient. Our analysis discloses the fact that the total charge concentration of substrate and product, though each is in a nonequilibrium state individually, obeys an equilibrium Boltzmann distribution, and therefore contributes as a normal charged ion species to the ionic strength. This explains why the Debye-Hückel limiting law still works in a considerable range of conditions even though the effects of charged substrate and product particles are not specifically and explicitly considered in the theory.

  15. Estimating the number of double-strand breaks formed during meiosis from partial observation.

    PubMed

    Toyoizumi, Hiroshi; Tsubouchi, Hideo

    2012-12-01

    Analyzing the basic mechanism of DNA double-strand breaks (DSB) formation during meiosis is important for understanding sexual reproduction and genetic diversity. The location and amount of meiotic DSBs can be examined by using a common molecular biological technique called Southern blotting, but only a subset of the total DSBs can be observed; only DSB fragments still carrying the region recognized by a Southern blot probe are detected. With the assumption that DSB formation follows a nonhomogeneous Poisson process, we propose two estimators of the total number of DSBs on a chromosome: (1) an estimator based on the Nelson-Aalen estimator, and (2) an estimator based on a record value process. Further, we compared their asymptotic accuracy.
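
For readers unfamiliar with the first estimator mentioned, the Nelson-Aalen estimator accumulates increments of (events)/(number at risk) over the observed event times to estimate the cumulative hazard; a censoring-free sketch (a generic illustration, not the authors' DSB-specific estimator):

```python
def nelson_aalen(event_times, n_at_risk):
    """Nelson-Aalen estimate of the cumulative hazard H(t) from exact,
    untied event times with no censoring: at each event, add
    1 / (number still at risk).  Returns the step function as a list of
    (time, H(t)) pairs."""
    curve, H = [], 0.0
    for t in sorted(event_times):
        H += 1.0 / n_at_risk
        curve.append((t, H))
        n_at_risk -= 1
    return curve

# Three events among three subjects at risk:
curve = nelson_aalen([2.0, 5.0, 9.0], n_at_risk=3)
```

The curve ends at H = 1/3 + 1/2 + 1 ≈ 1.83; with ties or censoring, the increments become d_i/n_i over the distinct event times.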

  16. Electrostatic forces in the Poisson-Boltzmann systems

    NASA Astrophysics Data System (ADS)

    Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2013-09-01

    Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for its validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities such as atomic point charges and the existence of discontinuous dielectric as in the widely used classical piece-wise constant dielectric models.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guess, T.R.; Wischmann, K.B.; Stavig, M.E.

    Tensile properties were measured for nineteen different formulations of epoxy encapsulating materials. Formulations were of different combinations of two neat resins (Epon 828 and Epon 826, with and without CTBN modification), three fillers (ALOX, GNM and mica) and four hardeners (Z, DEA, DETDA-SA and ANH-2). Five of the formulations were tested at -55, -20, 20 and 60C, one formulation at -55, 20 and 71C; and the remaining formulations at 20C. Complete stress-strain curves are presented along with tables of tensile strength, initial modulus and Poisson's ratio. The stress-strain responses are nonlinear and are temperature dependent. The reported data provide information for comparing the mechanical properties of encapsulants containing the suspected carcinogen Shell Z with the properties of encapsulants containing noncarcinogenic hardeners. Also, calculated shear moduli, based on measured tensile moduli and Poisson's ratio, are in very good agreement with reported shear moduli from experimental torsional pendulum tests.

  18. Prediction accident triangle in maintenance of underground mine facilities using Poisson distribution analysis

    NASA Astrophysics Data System (ADS)

    Khuluqi, M. H.; Prapdito, R. R.; Sambodo, F. P.

    2018-04-01

    In Indonesia, mining is categorized as a hazardous industry. In recent years, a dramatic increase in mining equipment and technological complexity has resulted in higher maintenance expectations, accompanied by changes in working conditions, especially regarding safety. Ensuring safety while conducting maintenance works in an underground mine is important as an integral part of accident prevention programs. The accident triangle has provided support to safety practitioners in drawing a road map for preventing accidents. The Poisson distribution is appropriate for the analysis of accidents at a specific site in a given time period. Based on the analysis of accident statistics in the underground mine maintenance of PT. Freeport Indonesia from 2011 through 2016, a new accident triangle is found: 12 minor accidents and 66 equipment damages for every 1 major accident. The result can be used to improve future accident prevention programs.

  19. Probabilistic reasoning in data analysis.

    PubMed

    Sirovich, Lawrence

    2011-09-20

    This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
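
The Markovian-arrivals point in this record can be demonstrated in a few lines: drawing exponential waiting times and counting arrivals per window yields counts whose variance matches their mean, the Poisson signature (a generic illustration, not part of the lecture materials):

```python
import random

def window_counts(rate, window, n_windows, seed=42):
    """Simulate arrivals with exponential waiting times (a Poisson
    process of the given rate) and count events in each window."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_windows):
        t, k = rng.expovariate(rate), 0
        while t <= window:
            k += 1
            t += rng.expovariate(rate)
        counts.append(k)
    return counts

counts = window_counts(rate=2.0, window=5.0, n_windows=4000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Both mean and variance come out near rate * window = 10.
```

Equality of mean and variance is exactly the property whose failure defines the overdispersed processes discussed elsewhere on this page.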

  20. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds

    NASA Astrophysics Data System (ADS)

    Martínez-Torres, David; Miranda, Eva

    2018-01-01

    We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.

  1. Compound Poisson Law for Hitting Times to Periodic Orbits in Two-Dimensional Hyperbolic Systems

    NASA Astrophysics Data System (ADS)

    Carney, Meagan; Nicol, Matthew; Zhang, Hong-Kun

    2017-11-01

    We show that a compound Poisson distribution holds for scaled exceedances of observables φ uniquely maximized at a periodic point ζ in a variety of two-dimensional hyperbolic dynamical systems with singularities (M, T, μ), including the billiard maps of Sinai dispersing billiards in both the finite and infinite horizon case. The observable we consider is of the form φ(z) = -ln d(z, ζ), where d is a metric defined in terms of the stable and unstable foliations. The compound Poisson process we obtain is a Pólya-Aeppli distribution of index θ. We calculate θ in terms of the derivative of the map T. Furthermore, if we define M_n = max{φ, …, φ∘T^n} and u_n(τ) by lim_{n→∞} n μ(φ > u_n(τ)) = τ, then the maximal process satisfies an extreme value law of the form μ(M_n ≤ u_n) = e^{-θτ}. These results generalize to a broader class of functions maximized at ζ, though the formulas regarding the parameters in the distribution need to be modified.

  2. Overdispersion of the Molecular Clock: Temporal Variation of Gene-Specific Substitution Rates in Drosophila

    PubMed Central

    Hartl, Daniel L.

    2008-01-01

    Simple models of molecular evolution assume that sequences evolve by a Poisson process in which nucleotide or amino acid substitutions occur as rare independent events. In these models, the expected ratio of the variance to the mean of substitution counts equals 1, and substitution processes with a ratio greater than 1 are called overdispersed. Comparing the genomes of 10 closely related species of Drosophila, we extend earlier evidence for overdispersion in amino acid replacements as well as in four-fold synonymous substitutions. The observed deviation from the Poisson expectation can be described as a linear function of the rate at which substitutions occur on a phylogeny, which implies that deviations from the Poisson expectation arise from gene-specific temporal variation in substitution rates. Amino acid sequences show greater temporal variation in substitution rates than do four-fold synonymous sequences. Our findings provide a general phenomenological framework for understanding overdispersion in the molecular clock. Also, the presence of substantial variation in gene-specific substitution rates has broad implications for work in phylogeny reconstruction and evolutionary rate estimation. PMID:18480070
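
The overdispersion statistic at issue here is the index of dispersion R = variance/mean of substitution counts, which equals 1 under a Poisson clock; a minimal computation (illustrative counts, not the Drosophila data):

```python
def dispersion_index(counts):
    """Variance-to-mean ratio of counts (sample variance with n - 1).
    R = 1 is the Poisson expectation; R > 1 indicates overdispersion."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

# Counts with lineage-to-lineage rate variation are overdispersed:
r = dispersion_index([2, 9, 3, 12, 4])
```

Here `r` is about 3.1, the kind of value that signals temporal variation in gene-specific substitution rates.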

  3. Universality in the distance between two teams in a football tournament

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Dahmen, Silvio R.

    2014-03-01

    Is football (soccer) a universal sport? Beyond the question of geographical distribution, where the answer is most certainly yes, when looked at from a mathematical viewpoint the scoring process during a match can be thought of, in a first approximation, as being modeled by a Poisson distribution. Recently, it was shown that the scoring of real tournaments can be reproduced by means of an agent-based model (da Silva et al. (2013) [24]) based on two simple hypotheses: (i) the ability of a team to win a match is given by the rate of a Poisson distribution that governs its scoring during a match; and (ii) such ability evolves over time according to results of previous matches. In this article we are interested in the question of whether the time series represented by the scores of teams have universal properties. For this purpose we define a distance between two teams as the square root of the sum of squares of the score differences between teams over all rounds in a double-round-robin-system and study how this distance evolves over time. Our results suggest a universal distance distribution of tournaments of different major leagues which is better characterized by an exponentially modified Gaussian (EMG). This result is corroborated by our agent-based model.
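
The distance defined in this abstract has a direct implementation: with s_A(r) the accumulated score of team A after round r, d(A, B) = sqrt(Σ_r (s_A(r) − s_B(r))²). A sketch with made-up point trajectories (the exact score variable used by the authors may differ):

```python
import math

def team_distance(points_a, points_b):
    """Distance between two teams: the square root of the sum over all
    rounds of the squared difference in their accumulated points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(points_a, points_b)))

# Hypothetical accumulated-points trajectories over four rounds:
d = team_distance([3, 6, 6, 9], [0, 3, 6, 6])
```

Collecting this distance over all team pairs and all rounds of a league gives the distribution whose shape the paper compares across major leagues.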

  4. Tunneling current noise spectra of biased impurity with a phonon mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maslova, N. S.; Arseev, P. I.; Mantsevich, V. N., E-mail: vmantsev@gmail.com

    We report the results of theoretical investigations of the tunneling current noise spectra through a single-level impurity both in the presence and in the absence of electron–phonon interaction based on the nonequilibrium Green’s functions formalism. We show that due to the quantum nature of tunneling, the Fano factor is dramatically different from the Poisson limit both in the presence and in the absence of inelastic processes. The results are demonstrated to be sensitive to the tunneling contact parameters.

  5. Computation of solar perturbations with Poisson series

    NASA Technical Reports Server (NTRS)

    Broucke, R.

    1974-01-01

    Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.

  6. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution has gained importance in survival analysis for its similarity to the exponential distribution while allowing different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, in some circumstances it is appropriate to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
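As background for the failure-time component, the Lindley(θ) distribution has the closed-form survival function S(t) = (1 + θ + θt) e^(−θt) / (1 + θ); a minimal sketch (the function name is ours):

```python
import math

def lindley_survival(t, theta):
    """Lindley(theta) survival function:
    S(t) = (1 + theta + theta*t) / (1 + theta) * exp(-theta*t)."""
    return (1.0 + theta + theta * t) / (1.0 + theta) * math.exp(-theta * t)
```

S(0) = 1 and S decreases monotonically, as required of a survival function.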

  7. Application of spatial Poisson process models to air mass thunderstorm rainfall

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Fennessy, N. M.; Wang, Qinliang; Rodriguez-Iturbe, I.

    1987-01-01

    Eight years of summer storm rainfall observations from 93 stations in and around the 154 sq km Walnut Gulch catchment of the Agricultural Research Service, U.S. Department of Agriculture, in Arizona are processed to yield the total station depths of 428 storms. Statistical analysis of these random fields yields the first two moments, the spatial correlation and variance functions, and the spatial distribution of total rainfall for each storm. The absolute and relative worth of three Poisson models is evaluated by comparing their prediction of the spatial distribution of storm rainfall with observations from the second half of the sample. The effect of interstorm parameter variation is examined.

  8. Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.

    PubMed

    Fang, Hongyan; Zhang, Hong; Yang, Yaning

    2016-07-01

    Genome-wide association studies (GWAS) have achieved great success in identifying genetic variants, but the nature of GWAS imposes inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way of identifying susceptible genes. Rare variants typically have low minor allele frequencies, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely, ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods. © 2016 John Wiley & Sons Ltd/University College London.
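A much-simplified illustration of the underlying idea, not the published PAST statistic: pool the minor-allele counts in cases and controls, treat them as Poisson, and form a score-type z statistic for equal per-subject rates. All names and numbers below are hypothetical:

```python
import math

def poisson_burden_z(case_alleles, control_alleles, n_cases, n_controls):
    """z statistic comparing the observed pooled minor-allele count in
    cases with its expectation under a shared Poisson rate; the Poisson
    variance of the case count equals its expectation."""
    total = case_alleles + control_alleles
    expected_cases = total * n_cases / (n_cases + n_controls)
    return (case_alleles - expected_cases) / math.sqrt(expected_cases)

# hypothetical pooled counts: 30 minor alleles in 500 cases, 12 in 500 controls
z = poisson_burden_z(30, 12, 500, 500)
```

Here the expected case count under the null is 21, giving z = 9/√21 ≈ 1.96, borderline at the 5 % level.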

  9. Coupling Poisson rectangular pulse and multiplicative microcanonical random cascade models to generate sub-daily precipitation timeseries

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph

    2018-07-01

    To simulate the impacts of within-storm rainfall variability on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics that are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed markedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events).
The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
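A Poisson rectangular pulse generator of the kind used as the first stage can be sketched as alternating exponential dry spells and rectangular events with exponential durations and mean intensities; the parameter names and values below are illustrative, not the paper's fitted values:

```python
import random

def poisson_rectangular_pulses(n_events, mean_dry=20.0, mean_dur=4.0,
                               mean_int=1.5, seed=1):
    """Generate n_events rectangular pulses (start, duration, intensity):
    exponential interstorm periods, event durations, and mean intensities."""
    rng = random.Random(seed)
    t, events = 0.0, []
    for _ in range(n_events):
        t += rng.expovariate(1.0 / mean_dry)      # dry spell before the event
        dur = rng.expovariate(1.0 / mean_dur)     # event duration
        inten = rng.expovariate(1.0 / mean_int)   # constant (mean) intensity
        events.append((t, dur, inten))
        t += dur
    return events

storms = poisson_rectangular_pulses(100)
```

Each generated event would then be handed to the cascade model for disaggregation to finer intervals.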

  10. Estimation of parameters in Shot-Noise-Driven Doubly Stochastic Poisson processes using the EM algorithm--modeling of pre- and postsynaptic spike trains.

    PubMed

    Mino, H

    2007-01-01

    We estimate the parameters, namely the impulse response (IR) functions of linear time-invariant systems generating the intensity processes, of Shot-Noise-Driven Doubly Stochastic Poisson Processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.

  11. Intrinsic delay of permeable base transistor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wenchao; Guo, Jing; So, Franky

    2014-07-28

    Permeable base transistors (PBTs) fabricated by vacuum deposition or solution process have the advantages of easy fabrication and low power operation and are a promising device structure for flexible electronics. Intrinsic delay of PBT, which characterizes the speed of the transistor, is investigated by solving the three-dimensional Poisson equation and drift-diffusion equation self-consistently using the finite element method. Decreasing the emitter thickness lowers the intrinsic delay by improving on-current, and a thinner base is also preferred for low intrinsic delay because of fewer carriers in the base region at off-state. The intrinsic delay exponentially decreases as the emitter contact Schottky barrier height decreases, and it linearly depends on the carrier mobility. With an optimized emitter contact barrier height and device geometry, a sub-nano-second intrinsic delay can be achieved with a carrier mobility of ~10 cm²/V/s obtainable in solution processed indium gallium zinc oxide, which indicates the potential of solution processed PBTs for GHz operations.

  12. Mixture model to assess the extent of cross-transmission of multidrug-resistant pathogens in hospitals.

    PubMed

    Mikolajczyk, Rafael T; Kauermann, Göran; Sagel, Ulrich; Kretzschmar, Mirjam

    2009-08-01

    We create a mixture model based on Poisson processes to assess the extent of cross-transmission of multidrug-resistant pathogens in the hospital. We propose a 2-component mixture of Poisson processes to describe the time series of detected cases of colonization. The first component describes the admission process of patients with colonization, and the second describes the cross-transmission. The data set used to illustrate the method consists of the routinely collected records for methicillin-resistant Staphylococcus aureus (MRSA), imipenem-resistant Pseudomonas aeruginosa, and multidrug-resistant Acinetobacter baumannii over a period of 3 years in a German tertiary care hospital. For MRSA and multidrug-resistant A. baumannii, cross-transmission was estimated to be responsible for more than 80% of cases; for imipenem-resistant P. aeruginosa, cross-transmission was estimated to be responsible for 59% of cases. For new cases observed within a window of less than 28 days for MRSA and multidrug-resistant A. baumannii or 40 days for imipenem-resistant P. aeruginosa, there was a 50% or greater probability that the cause was cross-transmission. The proposed method offers a solution to assessing the extent of cross-transmission, which can be of clinical use. The method can be applied using freely available software (the package FlexMix in R) and it requires relatively little data.
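The two-component idea can be illustrated, in simplified form, by fitting a two-rate exponential mixture to interevent times with EM, the fast component standing in for cross-transmission and the slow one for admissions; this is a sketch of the concept, not the paper's actual estimator (which works on the counting-process time series and is fitted with FlexMix in R):

```python
import math, random

def em_two_exp_mixture(times, iters=200):
    """EM for a mixture pi*Exp(lam_f) + (1-pi)*Exp(lam_s) on
    interevent times, initialized with well-separated rates."""
    mean = sum(times) / len(times)
    pi, lam_f, lam_s = 0.5, 3.0 / mean, 1.0 / (3.0 * mean)
    for _ in range(iters):
        resp = []                                   # E-step: P(fast | t)
        for t in times:
            p_f = pi * lam_f * math.exp(-lam_f * t)
            p_s = (1.0 - pi) * lam_s * math.exp(-lam_s * t)
            resp.append(p_f / (p_f + p_s))
        pi = sum(resp) / len(times)                 # M-step
        lam_f = sum(resp) / sum(r * t for r, t in zip(resp, times))
        lam_s = (len(times) - sum(resp)) / sum((1.0 - r) * t
                                               for r, t in zip(resp, times))
    return pi, lam_f, lam_s

# illustrative data: 70% "cross-transmission" gaps (rate 1.0),
# 30% "admission" gaps (rate 0.05)
rng = random.Random(0)
gaps = [rng.expovariate(1.0) if rng.random() < 0.7 else rng.expovariate(0.05)
        for _ in range(2000)]
weight_fast, rate_fast, rate_slow = em_two_exp_mixture(gaps)
```

With well-separated rates the fit recovers the mixing weight and both rates to good accuracy.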

  13. Explanation of temporal clustering of tsunami sources using the epidemic-type aftershock sequence model

    USGS Publications Warehouse

    Geist, Eric L.

    2014-01-01

    Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
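For reference, the ETAS conditional intensity has the standard form λ(t) = μ + Σ_{t_i<t} K e^{α(m_i − M0)} / (t − t_i + c)^p; a direct transcription (the parameter values are illustrative defaults, not the fitted tsunami-catalog estimates):

```python
import math

def etas_intensity(t, history, mu=0.05, K=0.02, alpha=1.2, c=0.01, p=1.1, M0=6.0):
    """ETAS conditional intensity: background rate mu plus an
    Omori-law contribution from every past event (t_i, m_i)."""
    lam = mu
    for t_i, m_i in history:
        if t_i < t:
            lam += K * math.exp(alpha * (m_i - M0)) / (t - t_i + c) ** p
    return lam

catalog = [(0.0, 7.0), (2.5, 6.4)]   # (time, magnitude) pairs, illustrative
```

The intensity spikes just after each event and relaxes back toward the background rate μ, which is what produces the excess of short interevent times relative to a stationary Poisson process.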

  14. Conditional analysis of mixed Poisson processes with baseline counts: implications for trial design and analysis.

    PubMed

    Cook, Richard J; Wei, Wei

    2003-07-01

    The design of clinical trials is typically based on marginal comparisons of a primary response under two or more treatments. The considerable gains in efficiency afforded by models conditional on one or more baseline responses have been extensively studied for Gaussian models. The purpose of this article is to present methods for the design and analysis of clinical trials in which the response is a count or a point process, and a corresponding baseline count is available prior to randomization. The methods are based on a conditional negative binomial model for the response given the baseline count and can be used to examine the effect of introducing selection criteria on power and sample size requirements. We show that designs based on this approach are more efficient than those proposed by McMahon et al. (1994).

  15. A new multivariate zero-adjusted Poisson model with applications to biomedicine.

    PubMed

    Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen

    2018-05-25

    Recently, although advances have been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply in high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure, in which the correlations between components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components, that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne

    2011-11-01

    We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for consequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
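The CGP idea can be sketched in one dimension under simplifying assumptions (Dirichlet boundaries, a dense direct solver standing in for the paper's black-box FFT/multigrid solvers, and linear rather than bilinear interpolation):

```python
import numpy as np

def solve_poisson_1d(f, h):
    """Direct solve of u'' = f on n interior nodes with u = 0 at both ends."""
    n = len(f)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f)

def cgp_solve_1d(f_fine, h_fine):
    """Coarse-grid projection sketch: restrict the right-hand side by
    full weighting, solve on a grid of twice the spacing, then
    interpolate the solution back to the fine grid."""
    n = len(f_fine)                       # interior nodes; n + 1 must be even
    nc = (n + 1) // 2 - 1                 # coarse interior nodes
    j = np.arange(1, nc + 1)
    f_coarse = 0.25 * (f_fine[2 * j - 2] + 2.0 * f_fine[2 * j - 1]
                       + f_fine[2 * j])   # full-weighting restriction
    u_coarse = solve_poisson_1d(f_coarse, 2.0 * h_fine)
    xc = np.concatenate(([0.0], 2.0 * h_fine * j, [(n + 1) * h_fine]))
    uc = np.concatenate(([0.0], u_coarse, [0.0]))
    return np.interp(h_fine * np.arange(1, n + 1), xc, uc)

# manufactured solution u = sin(pi x) on (0, 1), so f = -pi^2 sin(pi x)
n, h = 63, 1.0 / 64
x = h * np.arange(1, n + 1)
u_cgp = cgp_solve_1d(-np.pi**2 * np.sin(np.pi * x), h)
```

The coarse solve is cheaper by roughly the coarsening factor per dimension while the interpolated solution stays within the discretization error budget for smooth fields, which is the trade-off the CGP method exploits.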

  17. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
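The Poisson estimate discussed above is simply that the standard deviation of N counts is √N, so the relative uncertainty is 1/√N; the 4000-count level mentioned corresponds to about 1.6 %:

```python
import math

def poisson_relative_uncertainty(counts):
    """Relative (normalized) uncertainty of a Poisson count:
    sd / mean = sqrt(N) / N = 1 / sqrt(N)."""
    return 1.0 / math.sqrt(counts)
```

For example, `poisson_relative_uncertainty(4000)` is about 0.0158, consistent with keeping profile uncertainties below 2 % when counts per point exceed 4000.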

  18. Space-time-modulated stochastic processes

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved space-times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.

  19. Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Fatoohi, Rod A.

    1990-01-01

    The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
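An FFT-based Poisson solver of the conventional kind mentioned can be sketched for a periodic domain; this is a generic spectral solve, not the specific Cray implementation:

```python
import numpy as np

def fft_poisson_periodic(f, L=2 * np.pi):
    """Spectral solve of u_xx + u_yy = f on an n-by-n periodic square of
    side L; the k = 0 mode (the arbitrary mean of u) is set to zero."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # integer wavenumbers for L = 2*pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    fh = np.fft.fft2(f)
    with np.errstate(divide="ignore", invalid="ignore"):
        uh = np.where(k2 > 0, -fh / k2, 0.0)       # divide by -|k|^2 mode by mode
    return np.real(np.fft.ifft2(uh))

# check against u = sin(x)sin(y), for which u_xx + u_yy = -2 sin(x)sin(y)
n = 64
x = np.arange(n) * 2 * np.pi / n
X, Y = np.meshgrid(x, x, indexing="ij")
u = fft_poisson_periodic(-2.0 * np.sin(X) * np.sin(Y))
```

Each solve costs O(n² log n) for the two transforms plus an elementwise division, which is what makes such solvers attractive building blocks for multitasking.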

  20. Yes, the GIGP Really Does Work--And Is Workable!

    ERIC Educational Resources Information Center

    Burrell, Quentin L.; Fenton, Michael R.

    1993-01-01

    Discusses the generalized inverse Gaussian-Poisson (GIGP) process for informetric modeling. Negative binomial distribution is discussed, construction of the GIGP process is explained, zero-truncated GIGP is considered, and applications of the process with journals, library circulation statistics, and database index terms are described. (50…

  1. Exploring the evolutionary mechanism of complex supply chain systems using evolving hypergraphs

    NASA Astrophysics Data System (ADS)

    Suo, Qi; Guo, Jin-Li; Sun, Shiwei; Liu, Han

    2018-01-01

    A new evolutionary model is proposed to describe the characteristics and evolution pattern of supply chain systems using evolving hypergraphs, in which nodes represent enterprise entities while hyperedges represent the relationships among diverse trades. The nodes arrive at the system in accordance with a Poisson process, with the evolving process incorporating the addition of new nodes, linking of old nodes, and rewiring of links. Grounded in the Poisson process theory and continuum theory, the stationary average hyperdegree distribution is shown to follow a shifted power law (SPL), and the theoretical predictions are consistent with the results of numerical simulations. Testing the impact of parameters on the model yields a positive correlation between hyperdegree and degree. The model also uncovers macro characteristics of the relationships among enterprises due to the microscopic interactions among individuals.

  2. Study of photon correlation techniques for processing of laser velocimeter signals

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1977-01-01

    The objective was to provide the theory and a system design for a new type of photon counting processor for low level dual scatter laser velocimeter (LV) signals which would be capable of both first order measurements of mean flow and turbulence intensity and second order time statistics: cross correlation, auto correlation, and related spectra. A general Poisson process model for low level LV signals and noise which is valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise was used. Computer simulation algorithms and higher order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate and subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.

  3. No Control Genes Required: Bayesian Analysis of qRT-PCR Data

    PubMed Central

    Matz, Mikhail V.; Wright, Rachel M.; Scott, James G.

    2013-01-01

    Background Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. Results In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the “classic” analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Conclusions Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. 
These procedures have been implemented as the MCMC.qpcr package in R. PMID:23977043

  4. A comparison of multiple indicator kriging and area-to-point Poisson kriging for mapping patterns of herbivore species abundance in Kruger National Park, South Africa

    PubMed Central

    Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P.J.; Ingram, Ben R.

    2015-01-01

    Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs. PMID:25729318

  5. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    PubMed

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.
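The overdispersion the authors report can be checked with the elementary variance-to-mean ratio, which equals 1 for an ideal Poisson model; a minimal sketch (the function name is ours):

```python
def overdispersion_index(coverage):
    """Variance-to-mean ratio of per-position read coverage; values well
    above 1 indicate overdispersion relative to a Poisson model,
    motivating Poisson-Gamma or Poisson-Lognormal hierarchies."""
    n = len(coverage)
    mean = sum(coverage) / n
    var = sum((c - mean) ** 2 for c in coverage) / (n - 1)  # sample variance
    return var / mean

near_poisson = overdispersion_index([10] * 50 + [11] * 50)
overdispersed = overdispersion_index([1] * 50 + [30] * 50)
```

A ratio far above 1, as the Plasmodium coverage data show, means a plain Poisson CNV caller will misjudge what counts as "too low" or "too high" coverage.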

  6. A comparison of multiple indicator kriging and area-to-point Poisson kriging for mapping patterns of herbivore species abundance in Kruger National Park, South Africa.

    PubMed

    Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P J; Ingram, Ben R

    Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs.

  7. Inhomogeneous Poisson process rate function inference from dead-time limited observations.

    PubMed

    Verma, Gunjan; Drost, Robert J

    2017-05-01

    The estimation of an inhomogeneous Poisson process (IHPP) rate function from a set of process observations is an important problem arising in optical communications and a variety of other applications. However, because of practical limitations of detector technology, one is often only able to observe a corrupted version of the original process. In this paper, we consider how inference of the rate function is affected by dead time, a period of time after the detection of an event during which a sensor is insensitive to subsequent IHPP events. We propose a flexible nonparametric Bayesian approach to infer an IHPP rate function given dead-time limited process realizations. Simulation results illustrate the effectiveness of our inference approach and suggest its ability to extend the utility of existing sensor technology by permitting more accurate inference on signals whose observations are dead-time limited. We apply our inference algorithm to experimentally collected optical communications data, demonstrating the practical utility of our approach in the context of channel modeling and validation.
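A sketch of the forward model: simulate an IHPP by Lewis-Shedler thinning, then censor events arriving within a fixed dead time of the last detection. The rate function and parameter values below are illustrative assumptions, not the paper's experimental settings:

```python
import math, random

def simulate_ihpp_with_dead_time(rate_fn, t_max, lam_max, dead_time, seed=7):
    """Lewis-Shedler thinning for an IHPP with rate_fn <= lam_max on
    [0, t_max], followed by dead-time censoring of the arrivals."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)              # candidate from rate lam_max
        if t > t_max:
            break
        if rng.random() < rate_fn(t) / lam_max:    # accept with prob rate/lam_max
            arrivals.append(t)
    detected, last = [], float("-inf")
    for a in arrivals:                             # sensor blind for dead_time
        if a - last >= dead_time:
            detected.append(a)
            last = a
    return arrivals, detected

arrivals, detected = simulate_ihpp_with_dead_time(
    lambda t: 3.0 + 2.0 * math.sin(t), 50.0, 5.0, 0.2)
```

The gap between `arrivals` and `detected` is exactly the corruption the proposed Bayesian inference must undo when recovering the rate function.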

  8. Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation

    NASA Astrophysics Data System (ADS)

    Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo

    2014-03-01

    An option is a derivative instrument that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula generally used to determine the price of an option does not involve a skewness factor, and it is difficult to apply in the computation process because it produces oscillations for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using the Shifted Poisson model and transforming it into the form of a Linear Approximation in the complete market to reduce the oscillation. The resulting Linear Approximation formula predicts the price of an option very accurately and successfully reduces the oscillations in the calculation process.

  9. Replication of Cancellation Orders Using First-Passage Time Theory in Foreign Currency Market

    NASA Astrophysics Data System (ADS)

    Boilard, Jean-François; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako

    Our research focuses on the annihilation dynamics of limit orders in a spot foreign currency market for various currency pairs. We analyze the cancellation order distribution conditioned on the normalized distance from the mid-price, where the normalized distance is defined as the final distance divided by the initial distance. To reproduce the real data, we introduce two simple models that assume the market price moves randomly and cancellation occurs either after a fixed time t or at the first event of a Poisson process. Our model qualitatively reproduces the basic statistical properties of cancellation orders in the data when limit orders are cancelled according to the Poisson process. We briefly discuss the implications of our findings for the construction of more detailed microscopic models.
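    A minimal version of the Poisson-cancellation model can be sketched as follows: the mid-price performs a symmetric random walk, and the order is cancelled at the first event of a Poisson process, i.e. after an exponential waiting time. The tick size, rates, initial distance, and the use of a signed (rather than absolute) distance are all simplifying assumptions for illustration.

```python
import random

rng = random.Random(9)

def normalized_cancel_distance(d0, cancel_rate, tick=1.0):
    """Signed distance of a resting limit order from a random-walk mid-price,
    evaluated at cancellation time, normalized by the initial distance d0."""
    t_cancel = rng.expovariate(cancel_rate)   # Poisson-process cancellation time
    d, t = d0, 0.0
    while t + 1.0 <= t_cancel:
        d += rng.choice((-tick, tick))        # mid-price moves one tick per unit time
        t += 1.0
    return d / d0

samples = [normalized_cancel_distance(20.0, 0.02) for _ in range(2000)]
print(round(sum(samples) / len(samples), 2))  # martingale price => mean near 1
```

    The spread of this normalized-distance distribution, rather than its mean, is what distinguishes the fixed-time and Poisson cancellation mechanisms in the paper.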

  10. General solution of the chemical master equation and modality of marginal distributions for hierarchic first-order reaction networks.

    PubMed

    Reis, Matthias; Kromer, Justus A; Klipp, Edda

    2018-01-20

    Multimodality is a phenomenon which complicates the analysis of statistical data based exclusively on mean and variance. Here, we present criteria for multimodality in hierarchic first-order reaction networks, consisting of catalytic and splitting reactions. Such networks are characterized by independent and dependent subnetworks. First, we prove the general solvability of the Chemical Master Equation (CME) for this type of reaction network and thereby extend the class of solvable CMEs. Our general solution is analytical in the sense that it allows for a detailed analysis of its statistical properties. Given Poisson/deterministic initial conditions, we then prove the independent species to be Poisson/binomially distributed, while the dependent species exhibit generalized Poisson/Khatri Type B distributions. Generalized Poisson/Khatri Type B distributions are multimodal for an appropriate choice of parameters. We illustrate our criteria for multimodality with several basic models, as well as the well-known two-stage transcription-translation network and Bateman's model from nuclear physics. For both examples, multimodality was previously not reported.

  11. Dendritic polyelectrolytes as seen by the Poisson-Boltzmann-Flory theory.

    PubMed

    Kłos, J S; Milewski, J

    2018-06-20

    G3-G9 dendritic polyelectrolytes accompanied by counterions are investigated using the Poisson-Boltzmann-Flory theory. Within this approach we solve numerically the Poisson-Boltzmann equation for the mean electrostatic potential and minimize the Poisson-Boltzmann-Flory free energy with respect to the size of the molecules. Such a scheme enables us to inspect the conformational and electrostatic properties of the dendrimers in equilibrium based on their response to varying the dendrimer generation. The calculations indicate that the G3-G6 dendrimers exist in the polyelectrolyte regime where absorption of counterions into the volume of the molecules is minor. Trapping of ions in the interior region becomes significant for the G7-G9 dendrimers and signals the emergence of the osmotic regime. We find that the behavior of the dendritic polyelectrolytes corresponds with the degree of ion trapping. In particular, in both regimes the polyelectrolytes are swollen as compared to their neutral counterparts and the expansion factor is maximal at the crossover generation G7.

  12. Quasi-neutral limit of Euler–Poisson system of compressible fluids coupled to a magnetic field

    NASA Astrophysics Data System (ADS)

    Yang, Jianwei

    2018-06-01

    In this paper, we consider the quasi-neutral limit of a three-dimensional Euler-Poisson system of compressible fluids coupled to a magnetic field. We prove that, as the Debye length tends to zero, periodic initial-value problems of the model have unique smooth solutions existing on the time interval where the ideal incompressible magnetohydrodynamic equations have a smooth solution. Meanwhile, it is proved that the smooth solutions converge to solutions of the incompressible magnetohydrodynamic equations with a sharp convergence rate in the quasi-neutral limit.

  13. Fission meter and neutron detection using poisson distribution comparison

    DOEpatents

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and in real time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
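    The core idea, that excess grouped neutrons make the count distribution over-dispersed relative to Poisson, can be illustrated with a variance-to-mean test on gate counts. The source parameters and burst multiplicities below are invented for illustration and are not taken from the patent.

```python
import random
from statistics import mean, pvariance

def excess_variance_ratio(counts):
    """Variance-to-mean ratio of per-gate counts: ~1 for a Poisson source,
    well above 1 when neutrons arrive in correlated (fission-chain) groups."""
    m = mean(counts)
    return pvariance(counts, m) / m

rng = random.Random(0)

# Poisson-like source: independent single neutrons per gate
poisson_counts = [sum(1 for _ in range(1000) if rng.random() < 0.005)
                  for _ in range(2000)]

# Fission-like source: rarer events, each yielding a burst of 2-4 neutrons
burst_counts = []
for _ in range(2000):
    n_events = sum(1 for _ in range(1000) if rng.random() < 0.001)
    burst_counts.append(sum(rng.choice([2, 3, 4]) for _ in range(n_events)))

print(round(excess_variance_ratio(poisson_counts), 2))  # near 1
print(round(excess_variance_ratio(burst_counts), 2))    # well above 1
```

    A real fission meter compares the full observed count distribution against the Poisson prediction, but the variance-to-mean ratio already captures the discriminating signal.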

  14. Method for resonant measurement

    DOEpatents

    Rhodes, G.W.; Migliori, A.; Dixon, R.D.

    1996-03-05

    A method of measurement of objects to determine object flaws, Poisson's ratio (σ), and shear modulus (μ) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identifying resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio from other modes dependent on both the shear modulus and Poisson's ratio. 1 fig.

  15. Characteristics of service requests and service processes of fire and rescue service dispatch centers: analysis of real world data and the underlying probability distributions.

    PubMed

    Krueger, Ute; Schimmelpfeng, Katja

    2013-03-01

    A sufficient staffing level in fire and rescue dispatch centers is crucial for saving lives. Therefore, it is important to estimate the expected workload properly. For this purpose, we analyzed whether a dispatch center can be considered as a call center. Current call center publications very often model call arrivals as a non-homogeneous Poisson process. This is based on the underlying assumption that each caller decides independently whether or not to call. In case of an emergency, however, there are often calls from more than one person reporting the same incident and thus these calls are not independent. Therefore, this paper focuses on the dependency of calls in a fire and rescue dispatch center. We analyzed and evaluated several distributions in this setting. Results are illustrated using real-world data collected from a typical German dispatch center in Cottbus ("Leitstelle Lausitz"). We identified the Pólya distribution as being superior to the Poisson distribution in describing the call arrival rate, and the Weibull distribution to be more suitable than the exponential distribution for interarrival times and service times. However, the commonly used distributions offer acceptable approximations. This is important for estimating a sufficient staffing level in practice using, e.g., the Erlang-C model.
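    Since the abstract points to the Erlang-C model for staffing, here is a sketch of the standard Erlang-C waiting probability for an M/M/c queue and the smallest staffing level meeting a service target; the arrival and service rates below are illustrative assumptions, not the Cottbus data.

```python
import math

def erlang_c(lam, mu, c):
    """Erlang-C: probability that an arriving call must wait in an M/M/c queue
    (Poisson arrivals at rate lam, exponential service at rate mu, c agents)."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c                        # agent utilization; must be < 1 for stability
    numer = (a ** c / math.factorial(c)) / (1 - rho)
    denom = sum(a ** k / math.factorial(k) for k in range(c)) + numer
    return numer / denom

def agents_needed(lam, mu, max_wait_prob):
    """Smallest staffing level keeping the waiting probability below a target."""
    c = math.floor(lam / mu) + 1       # minimum stable staffing
    while erlang_c(lam, mu, c) > max_wait_prob:
        c += 1
    return c

# 30 calls/hour, 6-minute mean handling time (mu = 10/hour), <=20% waiting
print(agents_needed(30.0, 10.0, 0.2))  # -> 6 agents
```

    The paper's point is that dependent (Pólya-distributed) arrivals violate the Poisson assumption built into Erlang-C, so results like this should be read as approximations.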

  16. Exploring the existence of a stayer population with mover-stayer counting process models: application to joint damage in psoriatic arthritis.

    PubMed

    Yiu, Sean; Farewell, Vernon T; Tom, Brian D M

    2017-08-01

    Many psoriatic arthritis patients do not progress to permanent joint damage in any of the 28 hand joints, even under prolonged follow-up. This has led several researchers to fit models that estimate the proportion of stayers (those who do not have the propensity to experience the event of interest) and to characterize the rate of developing damaged joints in the movers (those who do have the propensity to experience the event of interest). However, this paper demonstrates that, when fitted to the same data, the choice of model for the movers can lead to widely varying conclusions about a stayer population, thus implying that, if interest lies in a stayer population, a single analysis should not generally be adopted. The aim of the paper is to provide greater understanding regarding estimation of a stayer population by comparing the inferences, performance and features of multiple fitted models on real and simulated data sets. The models for the movers are based on Poisson processes with patient-level random effects and/or dynamic covariates, which are used to induce within-patient correlation, and observation-level random effects are used to account for time-varying unobserved heterogeneity. The gamma, inverse Gaussian and compound Poisson distributions are considered for the random effects.

  17. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632

  18. Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca

    2012-07-15

    Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and for variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
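    The two-stage construction above (a Poisson number of particles, each producing a Poisson number of hits) yields a Neyman Type A compound distribution. The following simulation sketch, with invented parameters, shows the resulting over-dispersion relative to a plain Poisson with the same mean.

```python
import math
import random
from statistics import mean, pvariance

rng = random.Random(2)

def poisson_sample(lam):
    """Knuth's method: multiply uniforms until the product drops below exp(-lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def hits_per_target(mu_particles, nu_hits):
    """Poisson(mu) particles enter the target; each produces Poisson(nu) hits."""
    return sum(poisson_sample(nu_hits) for _ in range(poisson_sample(mu_particles)))

samples = [hits_per_target(4.0, 2.0) for _ in range(5000)]
m = mean(samples)
d = pvariance(samples, m) / m        # variance-to-mean ratio
print(round(m, 2), round(d, 2))      # mean ~ mu*nu = 8; ratio ~ 1 + nu = 3
```

    A plain Poisson has variance-to-mean ratio 1, so the ratio near 1 + nu makes concrete why the combined distribution in the paper is non-Poisson.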

  19. A new method for extracting near-surface mass-density anomalies from land-based gravity data, based on a special case of Poisson's PDE at the Earth's surface: A case study of salt diapirs in the south of Iran

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Y.; Safari, A.; Ardalan, A.; Bahroudi, A.

    2015-12-01

    The current research provides a method for tracking near-surface mass-density anomalies using only land-based gravity data, based on a special version of Poisson's Partial Differential Equation (PDE) for the gravitational field at the Earth's surface. The research demonstrates how Poisson's PDE provides the capability to extract near-surface mass-density anomalies from land-based gravity data. Herein, this version of Poisson's PDE is mathematically introduced at the Earth's surface and then used to develop a new method for approximating the mass-density via derivatives of the Earth's gravitational field (i.e. via the gradient tensor). The authors believe that the PDE can give new insight into the behavior of the Earth's gravitational field at the Earth's surface, which can be useful for developing new methods of mass-density determination. In a case study, the proposed method is applied to a set of gravity stations located in the south of Iran. The results were numerically validated against established knowledge of the geological structures in the area of the case study. The method was also compared with two standard methods of mass-density determination. All the numerical experiments show that the proposed approach is well suited for tracking near-surface mass-density anomalies using only gravity data. Finally, the approach is also applied to petroleum exploration studies of salt diapirs in the south of Iran.

  20. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)

  1. Generating clustered scale-free networks using Poisson based localization of edges

    NASA Astrophysics Data System (ADS)

    Türker, İlker

    2018-05-01

    We introduce a variety of network models using a Poisson-based edge localization strategy, which results in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network towards a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited degree of the scale-free property. To imitate the high clustering of scale-free networks with higher accuracy, we adapted the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To combine these properties, we used a routine of flattening the edge array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieved clustered scale-free networks in a computational fashion, diverging from recent studies by following a simple but efficient approach.

  2. Lognormal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of α-Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2010-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
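    The Poisson-lognormal (P-LN) mixture tested in this analysis can be sketched by drawing each cell's activity from a lognormal distribution and its track count from a Poisson conditioned on that activity. The lognormal parameters below are illustrative, not the paper's fitted values.

```python
import math
import random
from statistics import mean, pvariance

rng = random.Random(7)

def poisson_sample(lam):
    """Knuth's method, adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Cellular activity is lognormal; alpha-track counts are Poisson given activity,
# so the marginal track-count distribution is Poisson-lognormal (over-dispersed).
activities = [rng.lognormvariate(1.0, 0.5) for _ in range(4000)]
tracks = [poisson_sample(a) for a in activities]

m = mean(tracks)
print(round(m, 2), round(pvariance(tracks, m) / m, 2))  # ratio > 1: not plain Poisson
```

    This over-dispersion is exactly why a plain Poisson fit can misrepresent the underlying lognormal activity distribution, the caution raised in the conclusion above.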

  3. Log Normal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of Alpha Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2008-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these data. Methods The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson – log normal (P – LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P – LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316

  4. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    PubMed Central

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model, and we adopt the contourlet transform, which provides a sparse representation of the directional components of images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies that characterize the transform coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models, and noise estimation in the transform domain. We supplement the algorithm with cycle spinning and Wiener filtering for further improvement. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138
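    The Poisson-Gaussian observation model underlying the denoiser can be sketched per pixel: photon shot noise that scales with the signal, plus signal-independent Gaussian read noise. The signal level and read-noise sigma below are assumptions for illustration.

```python
import math
import random
from statistics import mean, pvariance

rng = random.Random(4)

def poisson_sample(lam):
    """Knuth's method for a Poisson draw."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def noisy_pixel(signal, sigma_read):
    """Poisson-Gaussian model: Poisson shot noise plus Gaussian read noise."""
    return poisson_sample(signal) + rng.gauss(0.0, sigma_read)

signal, sigma = 30.0, 2.0
obs = [noisy_pixel(signal, sigma) for _ in range(5000)]
print(round(mean(obs), 2), round(pvariance(obs), 2))  # variance ~ signal + sigma^2
```

    The signal-dependent variance (here approximately signal + sigma squared, rather than a constant) is what makes plain Gaussian denoisers mismatched to photon-limited data.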

  5. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, as it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting.
    The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to nonlinear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting nonlinear models using least squares fitting (simple searches return roughly 10,000 references, not counting those who use it without knowing they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand, downward gradient methods have a much wider domain of convergence but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on.
    Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive nonlinear least squares fits to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to nonlinear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE measure for Poisson deviates.
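    The report's central point, minimizing the Poisson MLE measure rather than least squares, can be sketched for a one-parameter decay fit. A golden-section search stands in for the Levenberg-Marquardt machinery here, and the amplitude, lifetime, and binning are invented for illustration.

```python
import math
import random

rng = random.Random(3)

def poisson_sample(lam):
    """Knuth's method for a Poisson draw."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Simulated photon-counting histogram: model(t) = A * exp(-t / tau)
A, true_tau = 20.0, 2.0
centers = [0.125 * (i + 0.5) for i in range(64)]
data = [poisson_sample(A * math.exp(-t / true_tau)) for t in centers]

def poisson_mle_measure(tau):
    """Negative Poisson log-likelihood up to a data-only constant:
    sum(m - n*ln m) -- the quantity to minimize instead of least squares."""
    total = 0.0
    for t, n in zip(centers, data):
        m = A * math.exp(-t / tau)
        total += m - n * math.log(m)
    return total

def golden_min(f, a, b, tol=1e-6):
    """Golden-section search on a unimodal 1-D function (stand-in for L-M)."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

tau_hat = golden_min(poisson_mle_measure, 0.5, 5.0)
print(round(tau_hat, 2))   # close to the true lifetime of 2.0
```

    Replacing `poisson_mle_measure` with a sum of squared residuals in the same search reproduces the least-squares fit whose bias at low counts motivates the report.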

  6. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when the data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli process (species prevalence) and a Poisson process (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
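    The two-process structure (a Bernoulli prevalence process gating a Poisson intensity) can be sketched directly. The prevalence and intensity values below only loosely mirror the 66% zero share quoted in the abstract and are otherwise illustrative assumptions.

```python
import math
import random

rng = random.Random(11)

def zip_sample(pi_zero, lam):
    """Zero-inflated Poisson: a Bernoulli 'prevalence' process gates a Poisson count."""
    if rng.random() < pi_zero:
        return 0                       # structural zero (species absent)
    # Poisson count given presence (Knuth's method)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

counts = [zip_sample(0.66, 3.0) for _ in range(10000)]
zero_frac = counts.count(0) / len(counts)
m = sum(counts) / len(counts)
poisson_zero = math.exp(-m)            # zero share a plain Poisson with this mean implies
print(round(zero_frac, 2), round(poisson_zero, 2))
```

    The observed zero fraction far exceeds the plain-Poisson prediction for the same mean, which is the signature of zero inflation that motivates the mixture model.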

  7. Marginalized zero-inflated Poisson models with missing covariates.

    PubMed

    Benecha, Habtamu K; Preisser, John S; Divaris, Kimon; Herring, Amy H; Das, Kalyan

    2018-05-11

    Unlike zero-inflated Poisson regression, marginalized zero-inflated Poisson (MZIP) models for counts with excess zeros provide estimates with direct interpretations for the overall effects of covariates on the marginal mean. In the presence of missing covariates, MZIP and many other count data models are ordinarily fitted using complete case analysis methods due to lack of appropriate statistical methods and software. This article presents an estimation method for MZIP models with missing covariates. The method, which is applicable to other missing data problems, is illustrated and compared with complete case analysis by using simulations and dental data on the caries preventive effects of a school-based fluoride mouthrinse program. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Derivation of kinetic equations from non-Wiener stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Basharov, A. M.

    2013-12-01

    Kinetic differential-difference equations containing terms with fractional derivatives and describing α-stable Lévy processes with 0 < α < 1 have been derived in a unified manner in terms of one-dimensional stochastic differential equations driven solely by Poisson processes.

  9. Anomaly Detection in Dynamic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, Melissa

    2014-10-14

    Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large, so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason, the methods presented here use independent node- and edge-based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis, simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the changepoints over time. A second method is considered for monitoring communications in a large-scale computer network. The usage patterns in these types of networks are very bursty in nature and do not fit a Poisson process model.
    For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified as outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second-stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
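    The first-stage model (a piecewise-constant Poisson intensity with a changepoint) can be sketched with a maximum-likelihood grid scan. The rates, horizon, and grid below are illustrative, and this simple point estimate stands in for the Gibbs/SMC posterior inference used in the thesis.

```python
import math
import random

rng = random.Random(5)

def hpp(rate, t0, t1):
    """Homogeneous Poisson process events on [t0, t1)."""
    events, t = [], t0
    while True:
        t += rng.expovariate(rate)
        if t >= t1:
            return events
        events.append(t)

# Piecewise-constant intensity: the rate jumps from 2 to 10 at t = 50
events = hpp(2.0, 0.0, 50.0) + hpp(10.0, 50.0, 100.0)

def split_loglik(events, tau, t_end):
    """Log-likelihood (up to a constant) of a single changepoint at tau,
    with the MLE rate n/T plugged in on each side."""
    n1 = sum(1 for t in events if t < tau)
    n2 = len(events) - n1
    ll = 0.0
    if n1:
        ll += n1 * math.log(n1 / tau) - n1
    if n2:
        ll += n2 * math.log(n2 / (t_end - tau)) - n2
    return ll

grid = [0.5 * i for i in range(1, 200)]
tau_hat = max(grid, key=lambda tau: split_loglik(events, tau, 100.0))
print(tau_hat)   # near the true changepoint at 50
```

    Scoring candidate splits by profile likelihood like this is the frequentist analogue of the posterior-over-changepoints computation described above.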

  10. Poisson's ratio of collagen fibrils measured by small angle X-ray scattering of strained bovine pericardium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, Hannah C.; Sizeland, Katie H.; Kayed, Hanan R.

    Type I collagen is the main structural component of skin, tendons, and skin products, such as leather. Understanding the mechanical performance of collagen fibrils is important for understanding the mechanical performance of the tissues that they make up. While the mechanical properties of bulk tissue are well characterized, less is known about the mechanical behavior of individual collagen fibrils. In this study, bovine pericardium is subjected to strain while small angle X-ray scattering (SAXS) patterns are recorded using synchrotron radiation. The change in d-spacing, which is a measure of fibril extension, and the change in fibril diameter are determined from SAXS. The tissue is strained 0.25 (25%), with a corresponding strain of 0.045 observed in the collagen fibrils. The ratio of collagen fibril width contraction to length extension, or the Poisson's ratio, is 2.1 ± 0.7 for a tissue strain from 0 to 0.25. This Poisson's ratio indicates that the volume of individual collagen fibrils decreases with increasing strain, which is quite unlike most engineering materials. This high Poisson's ratio of individual fibrils may contribute to the high Poisson's ratio observed for tissues, contributing to some of the remarkable properties of collagen-based materials.
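
    The Poisson's ratio itself is simple arithmetic on the two strains, ν = −(transverse strain)/(axial strain); the transverse value below is chosen so the sketch reproduces the reported ratio of 2.1 and is not the measured figure:

```python
def poisson_ratio(axial_strain, transverse_strain):
    """nu = -(transverse strain) / (axial strain); nu > 1 implies the
    fibril loses volume as it is stretched."""
    return -transverse_strain / axial_strain

axial = 0.045        # fibril extension, from the d-spacing change (reported)
transverse = -0.0945 # fibril width contraction (illustrative, chosen to give 2.1)
nu = poisson_ratio(axial, transverse)
```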

  11. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes II: Size Effects on Ionic Distributions and Diffusion-Reaction Rates

    PubMed Central

    Lu, Benzhuo; Zhou, Y.C.

    2011-01-01

    The effects of finite particle size on electrostatics, density profiles, and diffusion have been a long-standing topic in the study of ionic solutions. The previous size-modified Poisson-Boltzmann and Poisson-Nernst-Planck models are revisited in this article. In contrast to many previous works that can only treat particle species with a single uniform size or two sizes, we generalize the Borukhov model to obtain a size-modified Poisson-Nernst-Planck (SMPNP) model that is able to treat nonuniform particle sizes. The numerical tractability of the model is demonstrated as well. The main contributions of this study are as follows. 1), We show that an (arbitrarily) size-modified PB model is indeed implied by the SMPNP equations under certain boundary/interface conditions, and can be reproduced through numerical solutions of the SMPNP. 2), The size effects in the SMPNP effectively reduce the densities of highly concentrated counterions around the biomolecule. 3), The SMPNP is applied to the diffusion-reaction process for the first time, to our knowledge. In the case of low substrate density near the enzyme reactive site, it is observed that the rate coefficients predicted by the SMPNP model are considerably larger than those by the PNP model, suggesting both ions and substrates are subject to finite size effects. 4), An accurate finite element method and a convergent Gummel iteration are developed for the numerical solution of the completely coupled nonlinear system of SMPNP equations. PMID:21575582

  12. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
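
    A toy version of ML time-of-arrival estimation from Poisson photon counts might look like the following sketch. The Gaussian pulse shape, the rates, and the grid-search maximizer are all assumptions of this illustration, not the paper's model; the integral term of the inhomogeneous-Poisson log-likelihood is dropped because it is nearly constant in the shift for a pulse well inside the observation window:

```python
import math
import random

random.seed(0)

def simulate_arrivals(t0, width, n_signal, n_background, T):
    """Photon arrival times: signal photons clustered around the pulse
    center t0, plus uniform background over [0, T]."""
    sig = [random.gauss(t0, width) for _ in range(n_signal)]
    bg = [random.uniform(0.0, T) for _ in range(n_background)]
    return sorted(sig + bg)

def log_likelihood(arrivals, t0, width, n_signal, n_background, T):
    """Sum of log-intensities for a Gaussian pulse plus flat background
    (the integral term is treated as constant in t0)."""
    ll = 0.0
    norm = width * math.sqrt(2.0 * math.pi)
    for t in arrivals:
        lam = n_background / T + n_signal * math.exp(-(t - t0) ** 2 / (2 * width ** 2)) / norm
        ll += math.log(lam)
    return ll

arrivals = simulate_arrivals(t0=5.0, width=0.2, n_signal=40, n_background=10, T=10.0)
grid = [i * 0.01 for i in range(1001)]            # candidate arrival times on [0, 10]
est = max(grid, key=lambda g: log_likelihood(arrivals, g, 0.2, 40, 10, 10.0))
```

    At high signal counts the grid maximizer sits near the true center; the threshold effect the paper analyzes appears when the signal count is lowered until background photons dominate the likelihood.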

  13. Environmental heterogeneity blurs the signature of dispersal syndromes on spatial patterns of woody species in a moist tropical forest

    PubMed Central

    Velázquez, Eduardo; Escudero, Adrián; de la Cruz, Marcelino

    2018-01-01

    We assessed the relative importance of dispersal limitation, environmental heterogeneity and their joint effects as determinants of the spatial patterns of 229 species in the moist tropical forest of Barro Colorado Island (Panama). We differentiated five types of species according to their dispersal syndrome; autochorous, anemochorous, and zoochorous species with small, medium-size and large fruits. We characterized the spatial patterns of each species and we checked whether they were best fitted by Inhomogeneous Poisson (IPP), Homogeneous Poisson cluster (HPCP) and Inhomogeneous Poisson cluster processes (IPCP) by means of the Akaike Information Criterion. We also assessed the influence of species’ dispersal mode in the average cluster size. We found that 63% of the species were best fitted by IPCP regardless of their dispersal syndrome, although anemochorous species were best described by HPCP. Our results indicate that spatial patterns of tree species in this forest cannot be explained only by dispersal limitation, but by the joint effects of dispersal limitation and environmental heterogeneity. The absence of relationships between dispersal mode and degree of clustering suggests that several processes modify the original spatial pattern generated by seed dispersal. These findings emphasize the importance of fitting point process models with a different biological meaning when studying the main determinants of spatial structure in plant communities. PMID:29451871

  14. 77 FR 13691 - Qualification of Drivers; Exemption Applications; Vision

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-07

    ..., ocular hypertension, retinal detachment, cataracts and corneal scarring. In most cases, their eye... Application of Multiple Regression Analysis of a Poisson Process,'' Journal of American Statistical...

  15. Internal Stress Distribution Measurement of TIG Welded SUS304 Samples Using Neutron Diffraction Technique

    NASA Astrophysics Data System (ADS)

    Muslih, M. Refai; Sumirat, I.; Sairun; Purwanta

    2008-03-01

    The distribution of residual stress in SUS304 samples that underwent a TIG welding process at four different electric currents has been measured. The welding was done in the middle part of the samples, which was previously grooved by a milling machine. Before welding, the samples were annealed at 650 degrees Celsius for one hour. The annealing was done to eliminate the residual stress generated by the grooving process, so that the residual stress within the samples arose solely from welding. The calculation of the distribution of residual stress was carried out by measuring the strains within the Fe(220) crystal planes of SUS304. Strain, Young's modulus, and Poisson's ratio of Fe(220) SUS304 were measured using the DN1-M neutron diffractometer; Young's modulus and Poisson's ratio were measured in situ. The calculations showed that the distribution of residual stress in the vicinity of the welded area is influenced both by the treatments applied during sample preparation and by the electric current used during welding.

  16. An Evaluation of Psychophysical Models of Auditory Change Perception

    PubMed Central

    Micheyl, Christophe; Kaernbach, Christian; Demany, Laurent

    2009-01-01

    In many psychophysical experiments, the participant's task is to detect small changes along a given stimulus dimension, or to identify the direction (e.g., upward vs. downward) of such changes. The results of these experiments are traditionally analyzed using a constant-variance Gaussian (CVG) model or a high-threshold (HT) model. Here, the authors demonstrate that for changes along three basic sound dimensions (frequency, intensity, and amplitude-modulation rate), such models cannot account for the observed relationship between detection thresholds and direction-identification thresholds. It is shown that two alternative models can account for this relationship. One of them is based on the idea of sensory “quanta”; the other assumes that small changes are detected on the basis of Poisson processes with low means. The predictions of these two models are then compared against receiver operating characteristics (ROCs) for the detection of changes in sound intensity. It is concluded that human listeners' perception of small and unidimensional acoustic changes is better described by a discrete-state Poisson model than by the more commonly used CVG model or by the less favored HT and quantum models. PMID:18954215

  17. Network based approaches reveal clustering in protein point patterns

    NASA Astrophysics Data System (ADS)

    Parker, Joshua; Barr, Valarie; Aldridge, Joshua; Samelson, Lawrence E.; Losert, Wolfgang

    2014-03-01

    Recent advances in super-resolution imaging have allowed for the sub-diffraction measurement of the spatial location of proteins on the surfaces of T-cells. The challenge is to connect these complex point patterns to the internal processes and interactions, both protein-protein and protein-membrane. We begin analyzing these patterns by forming a geometric network amongst the proteins and looking at network measures, such as the degree distribution. This allows us to compare experimentally observed patterns to models. Specifically, we find that the experimental patterns differ from heterogeneous Poisson processes, highlighting an internal clustering structure. Further work will be to compare our results to simulated protein-protein interactions to determine clustering mechanisms.
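
    A minimal version of such a geometric-network comparison against complete spatial randomness (a homogeneous Poisson reference pattern) can be sketched as follows; the point count and connection radius are assumptions of the illustration:

```python
import math
import random

random.seed(1)

def geometric_graph_degrees(points, radius):
    """Degree of each point in a geometric network: connect every
    pair of points closer than `radius`."""
    deg = [0] * len(points)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) < radius:
                deg[i] += 1
                deg[j] += 1
    return deg

# Reference pattern: homogeneous Poisson (CSR) points on the unit square.
n, r = 300, 0.05
pts = [(random.random(), random.random()) for _ in range(n)]
degs = geometric_graph_degrees(pts, r)
mean_deg = sum(degs) / n
expected = n * math.pi * r ** 2   # CSR expectation, ignoring edge effects
```

    A clustered experimental pattern would show a heavier-tailed degree distribution than this CSR baseline at the same density.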

  18. Quantitative model of price diffusion and market friction based on trading as a mechanistic random process.

    PubMed

    Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-14

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
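
    A toy merged-Poisson event simulation in this spirit (not the authors' order-book model, which also tracks prices, spreads, and queue positions) merges the two independent streams via exponential inter-event times:

```python
import random

random.seed(2)

def simulate_order_flow(rate_arrival, rate_cancel, T):
    """Merge two independent Poisson streams (order arrivals and
    cancellations) and track the number of resting orders over time."""
    t, book, events = 0.0, 0, []
    total = rate_arrival + rate_cancel
    while True:
        t += random.expovariate(total)      # next event of the merged stream
        if t > T:
            break
        if random.random() < rate_arrival / total:
            book += 1                       # a new order arrives
        elif book > 0:
            book -= 1                       # an outstanding order is cancelled
        events.append((t, book))
    return events

events = simulate_order_flow(rate_arrival=2.0, rate_cancel=1.0, T=100.0)
```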

  19. Quantitative Model of Price Diffusion and Market Friction Based on Trading as a Mechanistic Random Process

    NASA Astrophysics Data System (ADS)

    Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-01

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.

  20. Analytically Solvable Model of Spreading Dynamics with Non-Poissonian Processes

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Perotti, Juan I.; Kaski, Kimmo; Kertész, János

    2014-01-01

    Non-Poissonian bursty processes are ubiquitous in natural and social phenomena, yet little is known about their effects on the large-scale spreading dynamics. In order to characterize these effects, we devise an analytically solvable model of susceptible-infected spreading dynamics in infinite systems for arbitrary inter-event time distributions and for the whole time range. Our model is stationary from the beginning, and the role of the lower bound of inter-event times is explicitly considered. The exact solution shows that for early and intermediate times, the burstiness accelerates the spreading as compared to a Poisson-like process with the same mean and same lower bound of inter-event times. Such behavior is opposite for late-time dynamics in finite systems, where the power-law distribution of inter-event times results in a slower and algebraic convergence to a fully infected state in contrast to the exponential decay of the Poisson-like process. We also provide an intuitive argument for the exponent characterizing algebraic convergence.
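
    Power-law inter-event times of the kind studied here can be sampled by inverting the cdf of P(t) ∝ t^−α on t ≥ t_min; the exponent and cutoff below are illustrative:

```python
import random

random.seed(3)

def powerlaw_interevent(alpha, t_min):
    """Inverse-cdf sample from P(t) ~ t**(-alpha) on t >= t_min (alpha > 1).
    F(t) = 1 - (t/t_min)**(1-alpha), so t = t_min * (1-u)**(-1/(alpha-1))."""
    u = random.random()
    return t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# For alpha > 2 the mean exists: t_min * (alpha - 1) / (alpha - 2).
samples = [powerlaw_interevent(3.5, 1.0) for _ in range(20000)]
mean = sum(samples) / len(samples)   # theoretical mean here: 2.5 / 1.5 = 1.667
```

    The explicit lower bound t_min mirrors the role the paper assigns to the lower bound of inter-event times in its exact solution.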

  1. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
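
    The classical Richardson-Lucy iteration is the textbook maximum likelihood estimator under a Poisson noise model; the authors' multi-frame, regularized algorithm is more elaborate, but its core resembles this 1D sketch (flat initial estimate, symmetric PSF, noiseless blur for illustration):

```python
def convolve(x, psf):
    """1D 'same'-mode convolution; psf assumed odd-length."""
    k = len(psf) // 2
    return [sum(x[i - k + j] * psf[j] for j in range(len(psf))
                if 0 <= i - k + j < len(x)) for i in range(len(x))]

def richardson_lucy(observed, psf, iters=50):
    """Richardson-Lucy deconvolution: iterative ML estimate of the
    underlying image under Poisson noise."""
    psf_m = psf[::-1]                       # mirrored PSF for the correction step
    est = [1.0] * len(observed)
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        corr = convolve(ratio, psf_m)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]
truth = [0, 0, 0, 10, 0, 0, 0]
observed = convolve(truth, psf)             # noiseless blur of a point source
est = richardson_lucy(observed, psf)        # recovers a sharp peak at index 3
```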

  2. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  3. Simulation of diffuse-charge capacitance in electric double layer capacitors

    NASA Astrophysics Data System (ADS)

    Sun, Ning; Gersappe, Dilip

    2017-01-01

    We use a Lattice Boltzmann Model (LBM) in order to simulate diffuse-charge dynamics in Electric Double Layer Capacitors (EDLCs). Simulations are carried out for both the charge and the discharge processes on 2D systems of complex random electrode geometries (pure random, random spheres, and random fibers). The steric effect of concentrated solutions is considered by using modified Poisson-Nernst-Planck (MPNP) equations, which are compared with regular Poisson-Nernst-Planck (PNP) systems. The effects of electrode microstructures (electrode density, electrode filler morphology, filler size, etc.) on the net charge distribution and charge/discharge time are studied in detail. The influence of applied potential during the discharging process is also discussed. Our studies show how electrode morphology can be used to tailor the properties of supercapacitors.

  4. Rate of occurrence of failures based on a nonhomogeneous Poisson process: an ozone analyzer case study.

    PubMed

    de Moura Xavier, José Carlos; de Andrade Azevedo, Irany; de Sousa Junior, Wilson Cabral; Nishikawa, Augusto

    2013-02-01

    Atmospheric pollutant monitoring constitutes a primordial activity in public policies concerning air quality. In São Paulo State, Brazil, the São Paulo State Environment Company (CETESB) maintains an automatic network which continuously monitors CO, SO2, NOx, O3, and particulate matter concentrations in the air. The accuracy of the monitoring process is a fundamental condition for the actions to be taken by CETESB. As one of the support systems, a preventive maintenance program for the different analyzers used is part of the data quality strategy. Knowledge of the behavior of analyzer failure times could help optimize the program. To achieve this goal, the failure times of an ozone analyzer, considered a repairable system, were modeled by means of a nonhomogeneous Poisson process. The rate of occurrence of failures (ROCOF) was estimated for the intervals 0-70,800 h and 0-88,320 h, in which six and seven failures were observed, respectively. The results showed that the ROCOF estimate is influenced by the choice of the observation period (70,800 h and 88,320 h in the cases analyzed). Identification of preventive maintenance actions, mainly when parts replacement occurs in the last interval of observation, is highlighted, justifying the alteration in the behavior of the inter-arrival times. A follow-up on each analyzer is recommended in order to record the impact of the preventive maintenance program on the enhancement of its useful life.
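
    For a power-law (Crow-AMSAA) NHPP, the ROCOF and its maximum likelihood estimate from failure times observed on (0, T] have closed forms; the failure times below are hypothetical, not the CETESB analyzer data:

```python
import math

def power_law_nhpp_mle(times, T):
    """Crow-AMSAA MLE from failure times on (0, T]:
    ROCOF lambda(t) = (beta/theta) * (t/theta)**(beta - 1),
    beta_hat = n / sum(ln(T/t_i)), theta_hat = T / n**(1/beta_hat)."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    theta = T / n ** (1.0 / beta)
    return beta, theta

def rocof(t, beta, theta):
    return (beta / theta) * (t / theta) ** (beta - 1.0)

# Hypothetical failure times in hours (six failures, as in the first interval):
failures = [12000, 31000, 45000, 58000, 66000, 70000]
beta, theta = power_law_nhpp_mle(failures, T=70800)
# beta > 1 indicates an increasing ROCOF, i.e. the analyzer is deteriorating.
```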

  5. Development and evaluation of spatial point process models for epidermal nerve fibers.

    PubMed

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    PubMed

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue to milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. 
Further, the performance of the two methods was compared with the performance of a widely used estimation approach for frailty Cox models based on the penalized partial likelihood. The simulation study showed good performance for the Poisson maximum likelihood approach with Gaussian quadrature and biased variance component estimates for both the Poisson maximum likelihood with Laplace approximation and penalized partial likelihood approaches. Copyright © 2014. Published by Elsevier B.V.

  7. On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action

    NASA Astrophysics Data System (ADS)

    Chekhov, L. O.; Mazzocco, M.

    2017-12-01

    Let \\mathscr A be the space of bilinear forms on C^N with defining matrices A endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of the structure, and then this structure is extended to systems of bilinear forms whose dynamics is governed by the natural action A\\mapsto B ABT} of the {GL}_N Poisson-Lie group on \\mathscr A. A classification is given of all possible quadratic brackets on (B, A)\\in {GL}_N× \\mathscr A preserving the Poisson property of the action, thus endowing \\mathscr A with the structure of a Poisson homogeneous space. Besides the product Poisson structure on {GL}_N× \\mathscr A, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples (B,C, A)\\in {GL}_N× {GL}_N× \\mathscr A with the Poisson action A\\mapsto B ACT}, and it is shown that \\mathscr A then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles.

  8. A multiscale filter for noise reduction of low-dose cone beam projections.

    PubMed

    Yao, Weiguang; Farr, Jonathan B

    2015-08-21

    The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(−x²/(2σ_f²)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of residual noise, the optimal σ_f² is proved to be proportional to the noiseless fluence and modulated by local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with pulse time 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was higher by about 64% on average than that scanned with 16 ms. For the simulated head-and-neck patient projections with pulse time 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
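
    A 1D toy of fluence-dependent Gaussian smoothing conveys the idea of σ_f² growing with the local mean fluence. The paper derives σ_f analytically from the noise model and works on 2D projections; here the proportionality constant is a hypothetical stand-in and the noisy sample itself is used in place of the noiseless fluence estimate:

```python
import math

def adaptive_gaussian_smooth(signal, c=0.5):
    """Per-sample Gaussian smoothing whose squared scale grows with the
    local signal level (a stand-in for sigma_f**2 proportional to fluence).
    'c' is a hypothetical proportionality constant."""
    out = []
    for i in range(len(signal)):
        sigma2 = max(c * signal[i], 1e-6)   # wider kernel where fluence is high
        weights, acc = 0.0, 0.0
        for j, v in enumerate(signal):
            w = math.exp(-(j - i) ** 2 / (2.0 * sigma2))
            acc += w * v
            weights += w
        out.append(acc / weights)
    return out

noisy = [10, 12, 9, 11, 100, 103, 98, 101]   # a low/high fluence step
smoothed = adaptive_gaussian_smooth(noisy)
```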

  9. Delineating high-density areas in spatial Poisson fields from strip-transect sampling using indicator geostatistics: application to unexploded ordnance removal.

    PubMed

    Saito, Hirotaka; McKenna, Sean A

    2007-07-01

    An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.
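
    A threshold derived from the cdf of a background homogeneous Poisson field can be computed by direct inversion; the background rate λ and the probability level below are illustrative, not the study's values:

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))

def density_threshold(lam_background, prob=0.95):
    """Smallest count k with background cdf >= prob; cells whose anomaly
    count exceeds k are flagged as candidate high-density areas."""
    k = 0
    while poisson_cdf(k, lam_background) < prob:
        k += 1
    return k

k = density_threshold(2.0)   # with lam = 2 and prob = 0.95, k = 5
```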

  10. A physiologically based nonhomogeneous Poisson counter model of visual identification.

    PubMed

    Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren

    2018-04-30

    A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

    System dynamics models are usually used to investigate aggregate-level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. In particular, alteration of Poisson assumptions, adding heterogeneity to the decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.

  13. The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chiun-Chang, E-mail: chlee@mail.nhcue.edu.tw

    2014-05-15

    The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we get the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. Then the classical solutions follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that the global non-electroneutrality implies pointwise non-electroneutrality.

  14. A comparison of different statistical methods analyzing hypoglycemia data using bootstrap simulations.

    PubMed

    Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory

    2015-01-01

    Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on the data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) model and the zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both Poisson and negative binomial models yielded slightly inflated type I errors close to the nominal level, with reasonable power. Reasonable control of type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
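    A minimal sketch of the kind of bootstrap check described above: resample over-dispersed counts into two arms under the null and record how often a naive Poisson-based Wald test rejects. All data-generating parameters here are assumed for illustration, not taken from the trial:

```python
import numpy as np

rng = np.random.default_rng(1)

# Over-dispersed "observed" event counts (negative binomial: mean 3, var 7.5).
data = rng.negative_binomial(2, 0.4, size=200)

B, rejections = 2000, 0
for _ in range(B):
    # Resample two arms from the same pool, so the null of no difference holds.
    a = rng.choice(data, size=100, replace=True)
    b = rng.choice(data, size=100, replace=True)
    # Naive Wald test that assumes Poisson counts: Var(mean) = mean / n.
    se = np.sqrt(a.mean() / len(a) + b.mean() / len(b))
    if abs(a.mean() - b.mean()) > 1.96 * se:
        rejections += 1

print(rejections / B)  # well above the nominal 0.05: Poisson inflates type I error
```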

  15. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.

  16. Determination of oral mucosal Poisson's ratio and coefficient of friction from in-vivo contact pressure measurements.

    PubMed

    Chen, Junning; Suenaga, Hanako; Hogg, Michael; Li, Wei; Swain, Michael; Li, Qing

    2016-01-01

    Despite their considerable importance to biomechanics, there are no existing methods available to directly measure the apparent Poisson's ratio and friction coefficient of oral mucosa. This study aimed to develop an inverse procedure to determine these two biomechanical parameters by combining an in vivo experiment measuring the contact pressure between a partial denture and the underlying mucosa with nonlinear finite element (FE) analysis and a surrogate response surface (RS) modelling technique. First, the in vivo denture-mucosa contact pressure was measured by a tactile electronic sensing sheet. Second, a 3D FE model was constructed based on the patient CT images. Third, a range of apparent Poisson's ratios and coefficients of friction from the literature was considered as the design variables in a series of FE runs for constructing a RS surrogate model. Finally, the discrepancy between computed in silico and measured in vivo results was minimized to identify the best matching Poisson's ratio and coefficient of friction. The established non-invasive methodology was demonstrated to be effective in identifying such biomechanical parameters of oral mucosa and can potentially be used for determining the biomaterial properties of other soft biological tissues.

  17. Electronic health record analysis via deep poisson factor models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henao, Ricardo; Lu, James T.; Lucas, Joseph E.

    Electronic Health Record (EHR) phenotyping utilizes patient data captured through normal medical practice to identify features that may represent computational medical phenotypes. These features may be used to identify at-risk patients and improve prediction of patient morbidity and mortality. We present a novel deep multi-modality architecture for EHR analysis (applicable to joint analysis of multiple forms of EHR data), based on Poisson Factor Analysis (PFA) modules. Each modality, composed of observed counts, is represented as a Poisson distribution, parameterized in terms of hidden binary units. Information from different modalities is shared via a deep hierarchy of common hidden units. Activation of these binary units occurs with probability characterized as Bernoulli-Poisson link functions, instead of more traditional logistic link functions. In addition, we demonstrate that PFA modules can be adapted to discriminative modalities. To compute model parameters, we derive efficient Markov Chain Monte Carlo (MCMC) inference that scales efficiently, with significant computational gains when compared to related models based on logistic link functions. To explore the utility of these models, we apply them to a subset of patients from the Duke-Durham patient cohort. We identified a cohort of over 12,000 patients with Type 2 Diabetes Mellitus (T2DM) based on diagnosis codes and laboratory tests out of our patient population of over 240,000. Examining the common hidden units uniting the PFA modules, we identify patient features that represent medical concepts. Experiments indicate that our learned features are better able to predict mortality and morbidity than clinical features identified previously in a large-scale clinical trial.

  18. Electronic health record analysis via deep poisson factor models

    DOE PAGES

    Henao, Ricardo; Lu, James T.; Lucas, Joseph E.; ...

    2016-01-01

    Electronic Health Record (EHR) phenotyping utilizes patient data captured through normal medical practice to identify features that may represent computational medical phenotypes. These features may be used to identify at-risk patients and improve prediction of patient morbidity and mortality. We present a novel deep multi-modality architecture for EHR analysis (applicable to joint analysis of multiple forms of EHR data), based on Poisson Factor Analysis (PFA) modules. Each modality, composed of observed counts, is represented as a Poisson distribution, parameterized in terms of hidden binary units. Information from different modalities is shared via a deep hierarchy of common hidden units. Activation of these binary units occurs with probability characterized as Bernoulli-Poisson link functions, instead of more traditional logistic link functions. In addition, we demonstrate that PFA modules can be adapted to discriminative modalities. To compute model parameters, we derive efficient Markov Chain Monte Carlo (MCMC) inference that scales efficiently, with significant computational gains when compared to related models based on logistic link functions. To explore the utility of these models, we apply them to a subset of patients from the Duke-Durham patient cohort. We identified a cohort of over 12,000 patients with Type 2 Diabetes Mellitus (T2DM) based on diagnosis codes and laboratory tests out of our patient population of over 240,000. Examining the common hidden units uniting the PFA modules, we identify patient features that represent medical concepts. Experiments indicate that our learned features are better able to predict mortality and morbidity than clinical features identified previously in a large-scale clinical trial.

  19. On the Singularity of the Vlasov-Poisson System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Jian; Qin, Hong

    2013-04-26

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  20. On the singularity of the Vlasov-Poisson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Jian; Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08550

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  1. Some functional limit theorems for compound Cox processes

    NASA Astrophysics Data System (ADS)

    Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.

    2016-06-01

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  2. Some functional limit theorems for compound Cox processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korolev, Victor Yu.; Institute of Informatics Problems FRC CSC RAS; Chertok, A. V.

    2016-06-08

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  3. Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods

    PubMed Central

    Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.

    2017-01-01

    The Log-Gaussian Cox Process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly-stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and Variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
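    The doubly-stochastic structure described above can be illustrated by simulating a log-Gaussian Cox process on a 1-D grid. The hyperparameters are assumed for illustration; this sketches the model itself, not any of the fitting methods compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(8)

# 1-D grid for the point pattern.
n, dx = 200, 0.05
x = np.arange(n) * dx

# Second level: a Gaussian process draw gives the log-intensity
# (squared-exponential covariance with assumed hyperparameters).
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.5 ** 2))
g = rng.multivariate_normal(np.full(n, 2.0), C + 1e-8 * np.eye(n))

# First level: Poisson counts per cell, conditional on the intensity exp(g).
counts = rng.poisson(np.exp(g) * dx)
print(counts.sum())
```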

  4. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    PubMed Central

    2013-01-01

    Background The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. PMID:23442253
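    The flexibility argument can be illustrated with a toy draw from the Poisson-Gamma hierarchy (marginally negative binomial), whose variance-to-mean ratio exceeds the Poisson value of 1. The coverage depth and shape parameter below are assumed values, not taken from the malaria data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pos, mean_cov = 50_000, 30.0   # number of positions and mean coverage (assumed)

# Plain Poisson coverage: variance-to-mean ratio close to 1.
poisson_cov = rng.poisson(mean_cov, size=n_pos)

# Poisson-Gamma hierarchy: a Gamma-distributed rate per position, then a
# Poisson count. Marginally negative binomial: Var = mu + mu^2/shape > mu.
shape = 5.0
rates = rng.gamma(shape, mean_cov / shape, size=n_pos)
pg_cov = rng.poisson(rates)

print(poisson_cov.var() / poisson_cov.mean())   # close to 1
print(pg_cov.var() / pg_cov.mean())             # markedly over-dispersed
```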

  5. Stochastic Processes as True-Score Models for Highly Speeded Mental Tests.

    ERIC Educational Resources Information Center

    Moore, William E.

    The previous theoretical development of the Poisson process as a strong model for the true-score theory of mental tests is discussed, and additional theoretical properties of the model from the standpoint of individual examinees are developed. The paper introduces the Erlang process as a family of test theory models and shows in the context of…

  6. Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri

    2018-07-01

    This paper deals with the implementation of high order semi-Lagrangian particle methods to handle high dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods, and the paper focuses on specific algorithmic features to handle large dimensions. The methods are tested with uniform particle distributions, in particular against a recent multi-resolution wavelet-based method, on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspectives for accurate simulations of high dimensional kinetic equations by particle methods.

  7. Diffuse sorption modeling.

    PubMed

    Pivovarov, Sergey

    2009-04-01

    This work presents a simple solution for the diffuse double layer model, applicable to the calculation of surface speciation as well as to the simulation of ionic adsorption within the diffuse layer of solution in arbitrary salt media. Based on the Poisson-Boltzmann equation, the Gaines-Thomas selectivity coefficient for uni-bivalent exchange on clay, K_GT(Me2+/M+) = (Q_Me^(1/2)/Q_M)·{M+}/{Me2+}^(1/2) (Q is the equivalent fraction of the cation in the exchange capacity, and {M+} and {Me2+} are the ionic activities in solution), may be calculated as [surface charge, µeq/m²]/0.61. The obtained solution of the Poisson-Boltzmann equation was applied to the calculation of ionic exchange on clays and to the simulation of the surface charge of ferrihydrite in 0.01-6 M NaCl solutions. In addition, a new model of acid-base properties was developed. This model is based on the assumption that the net proton charge is not located on the mathematical surface plane but is diffusely distributed within the subsurface layer of the lattice. It is shown that the obtained solution of the Poisson-Boltzmann equation makes such calculations possible, and that this approach is more efficient than the original diffuse double layer model.
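    A small numerical sketch of the quoted Gaines-Thomas relation; the surface charge and ionic activities below are hypothetical values chosen for illustration, not results from the paper:

```python
import math

# Hypothetical surface charge (ueq/m^2); per the abstract,
# K_GT(Me2+/M+) ~= [surface charge, ueq/m^2] / 0.61.
surface_charge = 1.2
K_GT = surface_charge / 0.61

# Hypothetical ionic activities {M+} and {Me2+} in solution.
a_M, a_Me = 0.01, 0.001

# K_GT = (Q_Me^0.5 / Q_M) * {M+} / {Me2+}^0.5 with Q_Me + Q_M = 1.
# Substituting s = sqrt(Q_Me) gives c*s^2 + s - c = 0, c = K_GT*sqrt(a_Me)/a_M,
# whose positive root yields the equivalent fraction of the bivalent cation.
c = K_GT * math.sqrt(a_Me) / a_M
s = (-1.0 + math.sqrt(1.0 + 4.0 * c * c)) / (2.0 * c)
Q_Me = s * s
Q_M = 1.0 - Q_Me
print(Q_Me, Q_M)
```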

  8. Likelihood inference for COM-Poisson cure rate model with interval-censored data and Weibull lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, N

    2017-10-01

    In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over- and under-dispersion that are usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when selecting an improper competing cause distribution, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.

  9. Fractional models of seismoacoustic and electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Shevtsov, Boris; Sheremetyeva, Olga

    2017-10-01

    Statistical models of the seismoacoustic and electromagnetic activity caused by deformation disturbances are considered on the basis of the compound Poisson process and its fractional generalizations. Wave representations of these processes are used as well. Five regimes of deformation activity are discussed, along with their role in understanding the nature of earthquake precursors.

  10. Library Book Circulation and the Beta-Binomial Distribution.

    ERIC Educational Resources Information Center

    Gelman, E.; Sichel, H. S.

    1987-01-01

    Argues that library book circulation is a binomial rather than a Poisson process, and that individual book popularities are continuous beta distributions. Three examples demonstrate the superiority of beta over negative binomial distribution, and it is suggested that a bivariate-binomial process would be helpful in predicting future book…

  11. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomenon called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem not to have been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.
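    The fast/slow bursting dynamics can be sketched with a naive Euler simulation of the pre-reduction model; all rate constants are assumed illustrative values, and this is not the paper's reduction itself:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed illustrative rates: mRNA bursts arrive as a compound Poisson process
# (geometric burst sizes), protein is produced linearly from mRNA, and mRNA
# decays much faster than protein (the fast/slow separation in the text).
k_burst, mean_burst = 2.0, 5.0     # burst rate (1/h) and mean burst size
k_tl = 10.0                        # translation rate per mRNA (1/h)
g_m, g_p = 10.0, 0.5               # degradation rates: fast mRNA, slow protein

dt, T = 1e-3, 200.0
m, p = 0.0, 0.0
traj = []
for _ in range(int(T / dt)):
    if rng.random() < k_burst * dt:              # a burst event fires
        m += rng.geometric(1.0 / mean_burst)     # geometric burst size
    m -= g_m * m * dt                            # fast mRNA degradation
    p += (k_tl * m - g_p * p) * dt               # linear protein production/decay
    traj.append(p)

# Stationary mean protein level: k_burst*mean_burst*k_tl/(g_m*g_p) = 20.
print(np.mean(traj[len(traj) // 2:]))
```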

  12. Atomic clocks and the continuous-time random-walk

    NASA Astrophysics Data System (ADS)

    Formichella, Valerio; Camparo, James; Tavella, Patrizia

    2017-11-01

    Atomic clocks play a fundamental role in many fields, most notably they generate Universal Coordinated Time and are at the heart of all global navigation satellite systems. Notwithstanding their excellent timekeeping performance, their output frequency does vary: it can display deterministic frequency drift; diverse continuous noise processes result in nonstationary clock noise (e.g., random-walk frequency noise, modelled as a Wiener process), and the clock frequency may display sudden changes (i.e., "jumps"). Typically, the clock's frequency instability is evaluated by the Allan or Hadamard variances, whose functional forms can identify the different operative noise processes. Here, we show that the Allan and Hadamard variances of a particular continuous-time random-walk, the compound Poisson process, have the same functional form as for a Wiener process with drift. The compound Poisson process, introduced as a model for observed frequency jumps, is an alternative to the Wiener process for modelling random walk frequency noise. This alternate model fits well the behavior of the rubidium clocks flying on GPS Block-IIR satellites. Further, starting from jump statistics, the model can be improved by considering a more general form of continuous-time random-walk, and this could bring new insights into the physics of atomic clocks.
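    The claimed Allan-variance signature can be checked numerically: a compound Poisson frequency record (Poisson-timed jumps with Gaussian sizes, both assumptions of this sketch) shows the same linear-in-tau Allan variance as a continuous random-walk (Wiener) frequency model:

```python
import numpy as np

rng = np.random.default_rng(4)

def avar(y, m):
    # Overlapping Allan variance of fractional-frequency samples y
    # at averaging factor m (averaging time m * tau0).
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample averages
    d = ybar[m:] - ybar[:-m]                             # adjacent-average diffs
    return 0.5 * np.mean(d * d)

# Compound Poisson frequency record: in each sample interval the fractional
# frequency jumps a Poisson number of times, each jump Gaussian.
n, lam, jump_sd = 100_000, 0.1, 1e-12
counts = rng.poisson(lam, size=n)
y = np.cumsum(rng.normal(0.0, jump_sd * np.sqrt(counts)))

# Random-walk-FM signature: Allan variance grows linearly with tau, so
# quadrupling the averaging factor should roughly quadruple the estimate.
print(avar(y, 40) / avar(y, 10))
```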

  13. STDP allows fast rate-modulated coding with Poisson-like spike trains.

    PubMed

    Gilson, Matthieu; Masquelier, Timothée; Hugues, Etienne

    2011-10-01

    Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (~10-20 ms) for sufficiently many inputs (~100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
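    Inhomogeneous Poisson spike trains of the kind used as inputs here are commonly sampled by Lewis-Shedler thinning; a minimal sketch with an assumed sinusoidal rate modulated on the ~10-20 ms STDP timescale:

```python
import numpy as np

rng = np.random.default_rng(5)

def thinning_spike_train(rate_fn, rate_max, T):
    """Sample spike times on [0, T) from an inhomogeneous Poisson process
    with intensity rate_fn, by Lewis-Shedler thinning."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)        # candidate from max-rate process
        if t >= T:
            return np.array(spikes)
        if rng.random() < rate_fn(t) / rate_max:    # accept with intensity ratio
            spikes.append(t)

# Assumed illustrative rate: 20 Hz mean with a 50 Hz (20 ms period) modulation.
rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t / 0.02)   # Hz, t in seconds
train = thinning_spike_train(rate, rate_max=35.0, T=5.0)
print(len(train))  # mean rate 20 Hz over 5 s, so about 100 spikes in expectation
```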

  14. STDP Allows Fast Rate-Modulated Coding with Poisson-Like Spike Trains

    PubMed Central

    Hugues, Etienne

    2011-01-01

    Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks. PMID:22046113

  15. A discontinuous Poisson-Boltzmann equation with interfacial jump: homogenisation and residual error estimate.

    PubMed

    Fellner, Klemens; Kovtunenko, Victor A

    2016-01-01

    A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.

  16. Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding

    NASA Astrophysics Data System (ADS)

    Susemihl, Alex; Meir, Ron; Opper, Manfred

    2013-03-01

    Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.

  17. Predicting rates of inbreeding in populations undergoing selection.

    PubMed Central

    Woolliams, J A; Bijma, P

    2000-01-01

    Tractable forms of predicting rates of inbreeding (ΔF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. ΔF was shown to be approximately (1/4)(1 − ω) times the expected sum of squared lifetime contributions, where ω is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express ΔF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables, then the expected long-term contribution could be substituted for the observed one, provided (1/4) (since ω = 0) was increased to (1/2). Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting ΔF with sib indices in discrete generations, since previously published solutions had proved complex. PMID:10747074

  18. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  19. Poisson property of the occurrence of flip-flops in a model membrane.

    PubMed

    Arai, Noriyoshi; Akimoto, Takuma; Yamamoto, Eiji; Yasui, Masato; Yasuoka, Kenji

    2014-02-14

    How do lipid molecules in membranes perform a flip-flop? The flip-flops of lipid molecules play a crucial role in the formation and flexibility of membranes. However, little has been determined about the behavior of flip-flops, either experimentally, or in molecular dynamics simulations. Here, we provide numerical results of the flip-flops of model lipid molecules in a model membrane and investigate the statistical properties, using millisecond-order coarse-grained molecular simulations (dissipative particle dynamics). We find that there are three different ways of flip-flops, which can be clearly characterized by their paths on the free energy surface. Furthermore, we found that the probability of the number of the flip-flops is well fitted by the Poisson distribution, and the probability density function for the inter-occurrence times of flip-flops coincides with that of the forward recurrence times. These results indicate that the occurrence of flip-flops is a Poisson process, which will play an important role in the flexibilities of membranes.
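    The Poisson property reported above is easy to test on any sequence of event times. The sketch below (with a hypothetical event rate, not the flip-flop statistics from the simulations) draws i.i.d. exponential inter-occurrence times and checks that counts in unit windows have a Fano factor (variance over mean) near one, as a Poisson process requires:

```python
import random

random.seed(1)
rate, T = 2.0, 10_000.0            # hypothetical event rate and observation time

# A Poisson process has i.i.d. exponential inter-occurrence times.
times, t = [], 0.0
while True:
    t += random.expovariate(rate)
    if t >= T:
        break
    times.append(t)

# Counts in unit-time windows should be Poisson distributed with mean `rate`.
counts = [0] * int(T)
for s in times:
    counts[int(s)] += 1

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
fano = var / mean                   # equals 1 for a Poisson process
```

The same two diagnostics (Poisson-distributed counts, exponential waiting times) are the ones the authors use to argue that flip-flop occurrences form a Poisson process.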

  20. Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2008-11-01

    Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
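    The dose-per-fraction effect that motivates the fractionation correction can be illustrated with the standard LQ-Poisson TCP model (the radiobiological parameter values below are illustrative assumptions, not taken from the paper):

```python
import math

# Standard LQ-Poisson tumour control probability: after n fractions of dose d,
# clonogen survival is SF = exp(-n*(alpha*d + beta*d**2)), and under Poisson
# statistics for surviving clonogens TCP = exp(-N0 * SF).
alpha, beta, N0 = 0.3, 0.03, 1e7      # Gy^-1, Gy^-2, initial clonogen number (hypothetical)

def tcp(n, d):
    sf = math.exp(-n * (alpha * d + beta * d ** 2))
    return math.exp(-N0 * sf)

# Same total dose (60 Gy), different fractionation schedules:
conventional = tcp(30, 2.0)   # 30 fractions of 2 Gy
hypofrac = tcp(15, 4.0)       # 15 fractions of 4 Gy
```

The quadratic dose-per-fraction term makes TCP depend on the schedule and not just the total dose, and it is this nonlinearity that renders the LQ-model-based criteria nonconvex in the fluence variables.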

  1. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attributes of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter becomes unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
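    The low-mean/small-sample instability described above can be sketched directly (the parameter values below are hypothetical, not the study's simulation design): simulate Poisson-gamma counts and apply the method-of-moments estimator of the dispersion parameter under a well-behaved and a low-mean, small-sample scenario.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_nb(mu, alpha, n):
    """Poisson-gamma (NB2) counts: lam_i ~ Gamma(1/alpha, alpha*mu), y_i ~ Poisson(lam_i)."""
    lam = rng.gamma(1.0 / alpha, alpha * mu, n)
    return rng.poisson(lam)

def mom_alpha(y):
    """Method-of-moments estimate of the dispersion, from Var(y) = mu + alpha*mu**2."""
    m, v = y.mean(), y.var(ddof=1)
    return (v - m) / m ** 2

alpha = 0.5
results = {}
# (sample mean, sample size): a well-behaved case vs a low-mean, small-sample case.
for mu, n in [(5.0, 1000), (0.5, 50)]:
    ests = np.array([mom_alpha(simulate_nb(mu, alpha, n)) for _ in range(500)])
    results[(mu, n)] = (ests.mean(), ests.std())
```

With a low sample mean and small sample, the estimates scatter widely and can even turn negative, consistent with the abstract's conclusion that the dispersion parameter becomes unreliable.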

  2. A Three-dimensional Polymer Scaffolding Material Exhibiting a Zero Poisson's Ratio.

    PubMed

    Soman, Pranav; Fozdar, David Y; Lee, Jin Woo; Phadke, Ameya; Varghese, Shyni; Chen, Shaochen

    2012-05-14

    Poisson's ratio describes the degree to which a material contracts (expands) transversally when axially strained. A material with a zero Poisson's ratio does not transversally deform in response to an axial strain (stretching). In tissue engineering applications, scaffolding having a zero Poisson's ratio (ZPR) may be more suitable for emulating the behavior of native tissues and accommodating and transmitting forces to the host tissue site during wound healing (or tissue regrowth). For example, scaffolding with a zero Poisson's ratio may be beneficial in the engineering of cartilage, ligament, corneal, and brain tissues, which are known to possess Poisson's ratios of nearly zero. Here, we report a 3D biomaterial constructed from polyethylene glycol (PEG) exhibiting in-plane Poisson's ratios of zero for large values of axial strain. We use digital micro-mirror device projection printing (DMD-PP) to create single- and double-layer scaffolds composed of semi re-entrant pores whose arrangement and deformation mechanisms contribute to the zero Poisson's ratio. Strain experiments prove the zero Poisson's behavior of the scaffolds and that the addition of layers does not change the Poisson's ratio. Human mesenchymal stem cells (hMSCs) cultured on biomaterials with zero Poisson's ratio demonstrate the feasibility of utilizing these novel materials for biological applications which require little to no transverse deformations resulting from axial strains. Techniques used in this work allow Poisson's ratio to be both scale-independent and independent of the choice of strut material for strains in the elastic regime, and therefore ZPR behavior can be imparted to a variety of photocurable biomaterials.

  3. Smooth invariant densities for random switching on the torus

    NASA Astrophysics Data System (ADS)

    Bakhtin, Yuri; Hurth, Tobias; Lawley, Sean D.; Mattingly, Jonathan C.

    2018-04-01

    We consider a random dynamical system obtained by switching between the flows generated by two smooth vector fields on the 2d-torus, with the random switchings happening according to a Poisson process. Assuming that the driving vector fields are transversal to each other at all points of the torus and that each of them allows for a smooth invariant density and no periodic orbits, we prove that the switched system also has a smooth invariant density, for every switching rate. Our approach is based on an integration by parts formula inspired by techniques from Malliavin calculus.

  4. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    PubMed

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/~fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
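    The reason a compound Poisson rather than a plain Poisson approximation is needed is that occurrences of self-overlapping words clump. The toy simulation below (hypothetical words, read lengths and batch sizes, unrelated to the ChIP-Seq analysis) contrasts the self-overlapping word AAA with the non-overlapping word ACG in i.i.d. random reads:

```python
import random

random.seed(2)

def count_word(seq, w):
    """Overlapping occurrence count of word w in seq."""
    k = len(w)
    return sum(1 for i in range(len(seq) - k + 1) if seq[i:i + k] == w)

def fano(values):
    """Variance-to-mean ratio; equals 1 for a Poisson-distributed count."""
    m = sum(values) / len(values)
    v = sum((x - m) ** 2 for x in values) / len(values)
    return v / m

trials, n_reads, read_len = 1000, 30, 50
totals_aaa, totals_acg = [], []
for _ in range(trials):
    reads = ["".join(random.choices("ACGT", k=read_len)) for _ in range(n_reads)]
    totals_aaa.append(sum(count_word(r, "AAA") for r in reads))
    totals_acg.append(sum(count_word(r, "ACG") for r in reads))

fano_aaa = fano(totals_aaa)   # AAA occurrences clump: overdispersed, Fano > 1
fano_acg = fano(totals_acg)   # ACG cannot overlap itself: Fano stays near 1
```

The overdispersion of AAA counts is what the compound Poisson approximation (Poisson number of clumps, random clump sizes) is designed to capture.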

  5. Research in Stochastic Processes.

    DTIC Science & Technology

    1983-10-01

    increases. A more detailed investigation for the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and...J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes...stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound

  6. Nonlocal Poisson-Fermi model for ionic solvent.

    PubMed

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  7. Modeling bursts and heavy tails in human dynamics

    NASA Astrophysics Data System (ADS)

    Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László

    2006-03-01

    The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email and letter based communications, web browsing, library visits and stock trading, the timing of individual human actions follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experience very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting time of the individual tasks follows a heavy tailed distribution P(τw) ~ τw^(-α) with α=3/2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by α=1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display α=1, the surface mail based communication belongs to the α=3/2 universality class. Finally, we discuss possible extensions of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.

  8. Modeling bursts and heavy tails in human dynamics.

    PubMed

    Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László

    2006-03-01

    The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email and letter based communications, web browsing, library visits and stock trading, the timing of individual human actions follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experience very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting time of the individual tasks follows a heavy tailed distribution P(tau_w) ~ tau_w^(-alpha) with alpha=3/2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by alpha=1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display alpha=1, the surface mail based communication belongs to the alpha=3/2 universality class. Finally, we discuss possible extensions of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.
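    The second (fixed-queue-length) model is simple enough to sketch directly. In this minimal version (the queue length L and execution probability p are hypothetical choices, and the protocol is a simplified reading of the model): with probability p the highest-priority task is executed, otherwise a random one is, and each executed task is replaced by a new task with a random priority.

```python
import random

random.seed(3)

L, p, steps = 2, 0.999, 200_000
queue = [(random.random(), 0) for _ in range(L)]        # (priority, arrival step)
waits = []
for t in range(1, steps + 1):
    if random.random() < p:
        i = max(range(L), key=lambda j: queue[j][0])    # deterministic: highest priority
    else:
        i = random.randrange(L)                         # occasional priority-blind pick
    waits.append(t - queue[i][1])                       # waiting time of executed task
    queue[i] = (random.random(), t)                     # new task replaces it

waits.sort()
median_wait = waits[steps // 2]   # most tasks are executed almost immediately
max_wait = waits[-1]              # while a few wait for times of order 1/(1-p)
```

The resulting waiting-time distribution is heavy tailed (the alpha = 1 class discussed above, with a cutoff set by 1/(1-p)): the median wait is a single step while rare tasks wait thousands of steps.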

  9. A statistical approach for inferring the 3D structure of the genome.

    PubMed

    Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe

    2014-06-15

    Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies to spatial distance, and thereby may lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distances between the loci. The method can automatically adjust the transfer function relating the spatial distance to the Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms (two metric MDS methods using different stress functions, a non-metric version of MDS, and ChromSDE, a recently described, advanced MDS method) on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function. On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis. © The Author 2014. Published by Oxford University Press.
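    The role of the transfer function can be illustrated with a toy version of the count model (the parameter values below are hypothetical, and this is not the method's implementation): counts between two loci are Poisson with intensity beta * d**alpha for a decreasing power law (alpha < 0).

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical transfer-function parameters and pairwise distances:
alpha, beta = -3.0, 1000.0
d_true = np.array([1.0, 1.5, 2.0, 3.0])          # distances in arbitrary units
counts = rng.poisson(beta * d_true ** alpha)     # observed contact counts

# With alpha and beta known, maximizing the per-pair Poisson log-likelihood
# c*log(beta*d**alpha) - beta*d**alpha over d gives d_hat = (c/beta)**(1/alpha).
d_hat = (counts / beta) ** (1.0 / alpha)
```

In the actual method the transfer-function parameters are optimized jointly with the full 3D structure; this sketch only shows why larger distances (smaller counts) carry noisier distance estimates.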

  10. Saint-Venant end effects for materials with negative Poisson's ratios

    NASA Technical Reports Server (NTRS)

    Lakes, R. S.

    1992-01-01

    Results are presented from an analysis of Saint-Venant end effects for materials with negative Poisson's ratio. Examples are presented showing that slow decay of end stress occurs in circular cylinders of negative Poisson's ratio, whereas a sandwich panel containing rigid face sheets and a compliant core exhibits no anomalous effects for negative Poisson's ratio (but exhibits slow stress decay for core Poisson's ratios approaching 0.5). In sandwich panels with stiff but not perfectly rigid face sheets, a negative Poisson's ratio results in end stress decay, which is faster than it would be otherwise. It is suggested that the slow decay previously predicted for sandwich strips in plane deformation as a result of the geometry can be mitigated by the use of a negative Poisson's ratio material for the core.

  11. Poisson's ratio of fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Christiansson, Henrik; Helsing, Johan

    1996-05-01

    Poisson's ratio flow diagrams, that is, the Poisson's ratio versus the fiber fraction, are obtained numerically for hexagonal arrays of elastic circular fibers in an elastic matrix. High numerical accuracy is achieved through the use of an interface integral equation method. Questions concerning fixed point theorems and the validity of existing asymptotic relations are investigated and partially resolved. Our findings for the transverse effective Poisson's ratio, together with earlier results for random systems by other authors, make it possible to formulate a general statement for Poisson's ratio flow diagrams: For composites with circular fibers and where the phase Poisson's ratios are equal to 1/3, the system with the lowest stiffness ratio has the highest Poisson's ratio. For other choices of the elastic moduli for the phases, no simple statement can be made.

  12. Statistical properties of a filtered Poisson process with additive random noise: distributions, correlations and moment estimation

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.; Rypdal, M.

    2017-05-01

    Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
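    A discrete-time sketch of the first model variant (with illustrative parameters, not those of the paper): a one-pole filter driven by Poisson-distributed arrivals of exponentially distributed pulse amplitudes realizes the filtered Poisson process, and Gaussian noise is added to the output. The lowest-order moments can then be checked against the stationary formulas.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical parameters: pulse rate, pulse duration, mean amplitude, noise level.
rate, tau, mean_amp, sigma = 0.5, 10.0, 1.0, 0.5
dt, n_steps = 0.1, 100_000

# Pulses arriving in each time step: a Poisson number of exponential amplitudes.
k = rng.poisson(rate * dt, n_steps)
pulses = np.array([rng.exponential(mean_amp, m).sum() for m in k])

# A one-pole filter superposes one-sided exponential pulses of duration tau.
decay = np.exp(-dt / tau)
signal = np.zeros(n_steps)
for i in range(1, n_steps):
    signal[i] = signal[i - 1] * decay + pulses[i]

observed = signal + rng.normal(0.0, sigma, n_steps)   # purely additive noise term
steady = observed[2000:]                              # discard the initial transient

m_est = steady.mean()   # theory: rate * mean_amp * tau = 5
v_est = steady.var()    # theory: rate * tau * mean_amp**2 + sigma**2 = 5.25
```

The variance formula uses ⟨A²⟩ = 2⟨A⟩² for exponential amplitudes, so Var = rate·tau·⟨A²⟩/2 + sigma²; the additive noise shifts the variance but not the mean, which is one of the handles the paper uses to separate the two noise types.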

  13. AN EFFICIENT HIGHER-ORDER FAST MULTIPOLE BOUNDARY ELEMENT SOLUTION FOR POISSON-BOLTZMANN BASED MOLECULAR ELECTROSTATICS

    PubMed Central

    Bajaj, Chandrajit; Chen, Shun-Chuan; Rand, Alexander

    2011-01-01

    In order to compute polarization energy of biomolecules, we describe a boundary element approach to solving the linearized Poisson-Boltzmann equation. Our approach combines several important features including the derivative boundary formulation of the problem and a smooth approximation of the molecular surface based on the algebraic spline molecular surface. State of the art software for numerical linear algebra and the kernel independent fast multipole method is used for both simplicity and efficiency of our implementation. We perform a variety of computational experiments, testing our method on a number of actual proteins involved in molecular docking and demonstrating the effectiveness of our solver for computing molecular polarization energy. PMID:21660123

  14. Double dynamic scaling in human communication dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Shengfeng; Feng, Xin; Wu, Ye; Xiao, Jinhua

    2017-05-01

    In the last decades, human behavior has come to be understood in depth owing to the huge quantities of behavioral data available for study. The main finding in human dynamics is that temporal processes consist of high-activity bursty intervals alternating with long low-activity periods. A model assuming that the initiators of bursts follow a Poisson process is widely used in the modeling of human behavior. Here, we provide further evidence for the hypothesis that different bursty intervals are independent. Furthermore, we introduce a special threshold to quantitatively distinguish the time scales of complex dynamics based on this hypothesis. Our results suggest that human communication behavior is a composite process of double dynamics with midrange memory length. The method for calculating memory length could enhance the performance of many sequence-dependent systems, such as server operation and topic identification.

  15. SMPBS: Web server for computing biomolecular electrostatics using finite element solvers of size modified Poisson-Boltzmann equation.

    PubMed

    Xie, Yang; Ying, Jinyong; Xie, Dexuan

    2017-03-30

    SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu © 2017 Wiley Periodicals, Inc.

  16. Poisson Regression Analysis of Illness and Injury Surveillance Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation. The R open source software environment for statistical computing and graphics is used for analysis. Additional details about R and the data that were used in this report are provided in an Appendix. Information on how to obtain R and utility functions that can be used to duplicate results in this report are provided.
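    A log-linear Poisson main-effects model of this kind can be sketched with a hand-rolled Newton-Raphson (IRLS) fit on simulated data (the single binary covariate, the person-time offsets, and the rate parameters below are all hypothetical, not the DOE data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stratified data: binary occupational-group indicator and
# person-years at risk entering the model as an offset.
n = 400
x = rng.integers(0, 2, n)
py = rng.uniform(50.0, 150.0, n)                   # person-years at risk
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-4.0, 0.7])                  # log baseline rate, log rate ratio
y = rng.poisson(py * np.exp(X @ beta_true))        # event counts per stratum

# Newton-Raphson for the log-linear model log E[y] = log(py) + X @ beta.
beta = np.array([np.log(y.sum() / py.sum()), 0.0])
for _ in range(25):
    mu = py * np.exp(X @ beta)
    grad = X.T @ (y - mu)                          # score
    hess = X.T @ (X * mu[:, None])                 # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

# Quasi-likelihood check for extra-Poisson variation: Pearson chi-square / df,
# which should be near 1 when the Poisson model fits (as it does here by design).
mu = py * np.exp(X @ beta)
phi = ((y - mu) ** 2 / mu).sum() / (n - 2)
```

The fitted rate ratio between groups is exp(beta[1]); a phi value well above 1 would signal the over-dispersion the abstract adjusts for via the quasi-likelihood method of moments.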

  17. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (magnitude of an earthquake catalogue completeness). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
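    The gamma-mixture representation of the NBD mentioned above is easy to demonstrate numerically. In this sketch (illustrative parameter values), gamma-mixed Poisson counts reproduce the negative binomial mean-variance relation and agree with the Pascal parameterization:

```python
import numpy as np

rng = np.random.default_rng(6)

# NBD as a gamma-mixed Poisson: lam ~ Gamma(shape=k, scale=theta), N | lam ~ Poisson(lam).
k, theta, n = 2.0, 3.0, 200_000
lam = rng.gamma(k, theta, n)
counts = rng.poisson(lam)

mean = counts.mean()   # theory: k*theta = 6
var = counts.var()     # theory: k*theta*(1 + theta) = 24, i.e. overdispersed

# The same law in the Pascal parameterization, with p = 1/(1 + theta):
pascal = rng.negative_binomial(k, 1.0 / (1.0 + theta), n)
```

The variance exceeds the mean by the factor 1 + theta, which is exactly the clustering/overdispersion that the NBD's second parameter captures and the one-parameter Poisson distribution cannot.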

  18. Influence of loading and unloading velocity of confining pressure on strength and permeability characteristics of crystalline sandstone

    NASA Astrophysics Data System (ADS)

    Zhang, Dong-ming; Yang, Yu-shun; Chu, Ya-pei; Zhang, Xiang; Xue, Yan-guang

    2018-06-01

    Triaxial compression tests of crystalline sandstone under different loading and unloading velocities of confining pressure were carried out using the self-made "THM coupled with servo-controlled seepage apparatus for containing-gas coal", and the strength, deformation and permeability characteristics of the samples were analyzed. The results show that, with increasing loading-unloading velocity of confining pressure, the center of the specimen's Mohr stress circle shifts to the right, and the ultimate strength, peak strain and residual stress of the specimens increase gradually. With decreasing unloading velocity of confining pressure, the axial, radial and volumetric strains of the sample first decrease and then increase, with the radial strain decreasing more strongly; loading and unloading of confining pressure has a greater influence on the axial strain of the specimens. At the initial stage of loading, the deformation modulus decreases rapidly with increasing axial strain and the Poisson's ratio decreases gradually. When the confining pressure is loaded, the deformation modulus decreases gradually and the Poisson's ratio increases gradually; when the confining pressure is unloaded, the deformation modulus increases gradually and the Poisson's ratio decreases gradually. When the specimen reaches its ultimate strength, the deformation modulus decreases rapidly while the Poisson's ratio increases rapidly. During loading and unloading of confining pressure, the fitted curves of the deformation modulus and the Poisson's ratio versus confining pressure follow a quadratic polynomial function. There is a corresponding relationship between the evolution of rock permeability and damage deformation during loading and unloading: in the late stage of yielding the permeability increases slowly, and after the rock sample is destroyed it increases sharply. The fitted relationship between permeability and confining pressure follows an exponential function.
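
The two fitted laws mentioned at the end of the abstract (quadratic polynomial for modulus versus confining pressure, exponential for permeability versus confining pressure) can be reproduced with ordinary least squares. A minimal sketch on synthetic stand-in data (the numbers below are assumptions for illustration, not the paper's measurements):

```python
import numpy as np

# Assumed illustrative data: confining pressure sigma3 [MPa],
# deformation modulus E [GPa], permeability perm [arbitrary units]
sigma3 = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
E = np.array([5.1, 6.8, 7.9, 8.6, 8.9])
perm = np.array([8.0e-17, 5.2e-17, 3.4e-17, 2.2e-17, 1.4e-17])

# Quadratic law: E = a*sigma3^2 + b*sigma3 + c
a, b, c = np.polyfit(sigma3, E, deg=2)
E_fit = a * sigma3**2 + b * sigma3 + c

# Exponential law: perm = k0 * exp(m*sigma3), fitted as a line in log space
m, log_k0 = np.polyfit(sigma3, np.log(perm), deg=1)
k0 = np.exp(log_k0)

print("quadratic coefficients:", a, b, c)
print("exponential decay rate per MPa:", m)
```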

  19. A minimally-resolved immersed boundary model for reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar

    2013-12-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.
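
Computing a regularized discrete Green's function "on the fly" from a grid-based Poisson discretization can be sketched as follows: place a discrete delta source on a grid and relax the 5-point stencil with homogeneous Dirichlet walls. The grid size, source location and plain Jacobi solver are illustrative choices, not the paper's multigrid-preconditioned setup:

```python
import numpy as np

# Discrete Green's function of the 2D Poisson equation on a small grid:
# -Laplacian(G) = delta at the centre node, homogeneous Dirichlet walls.
n = 33
h = 1.0 / (n - 1)
rhs = np.zeros((n, n))
cx = cy = n // 2
rhs[cx, cy] = 1.0 / h**2          # discrete delta of unit strength

G = np.zeros((n, n))
for _ in range(4000):             # plenty of Jacobi sweeps for a grid this small
    G[1:-1, 1:-1] = 0.25 * (G[:-2, 1:-1] + G[2:, 1:-1]
                            + G[1:-1, :-2] + G[1:-1, 2:]
                            + h**2 * rhs[1:-1, 1:-1])

# Check the 5-point residual of -Laplacian(G) = rhs in the interior
lap = (G[:-2, 1:-1] + G[2:, 1:-1] + G[1:-1, :-2] + G[1:-1, 2:]
       - 4.0 * G[1:-1, 1:-1]) / h**2
residual = float(np.max(np.abs(lap + rhs[1:-1, 1:-1])))
print("max residual:", residual)
```

Columns of such grid responses, one per blob location, are what a minimally-resolved method couples through; production codes replace the Jacobi loop with a multigrid or preconditioned Krylov solver.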

  20. Poisson denoising on the sphere: application to the Fermi gamma ray space telescope

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2010-07-01

    The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high-energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary like wavelets or curvelets, and then applying a VST to the coefficients in order to obtain almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. By contrast, to study the Milky Way diffuse background, we remove point sources with a binary mask. The resulting gaps have to be interpolated: an extension to inpainting is therefore proposed. The method, applied to simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
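
The VST idea can be illustrated with the classical Anscombe transform 2·sqrt(x + 3/8), which maps Poisson counts to data with approximately unit variance; the paper's MS-VSTS combines a VST with multiscale spherical transforms, which this sketch does not attempt:

```python
import math, random

random.seed(0)

def poisson_sample(lam):
    # Knuth's method, fine for the moderate rates used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def anscombe(x):
    # Classical VST: for X ~ Poisson(lam), Var[2*sqrt(X + 3/8)] -> 1 as lam grows
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

for lam in (5.0, 10.0, 30.0):
    ys = [anscombe(poisson_sample(lam)) for _ in range(50_000)]
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / len(ys)
    print(lam, round(var, 3))   # stabilized variance, approaching 1
```

The stabilized variance is close to 1 regardless of the underlying rate, which is what lets Gaussian thresholding machinery be applied to Poisson data; low-count regimes need the refined VSTs discussed in the paper.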

  1. Bayesian multivariate Poisson abundance models for T-cell receptor data.

    PubMed

    Greene, Joshua; Birtwistle, Marc R; Ignatowicz, Leszek; Rempala, Grzegorz A

    2013-06-07

    A major feature of an adaptive immune system is its ability to generate B- and T-cell clones capable of recognizing and neutralizing specific antigens. These clones recognize antigens with the help of surface molecules, called antigen receptors, acquired individually during the clonal development process. In order to ensure a response to a broad range of antigens, the number of different receptor molecules is extremely large, resulting in a huge clonal diversity of both B- and T-cell receptor populations and making their experimental comparisons statistically challenging. To facilitate such comparisons, we propose a flexible parametric model of multivariate count data and illustrate its use in a simultaneous analysis of multiple antigen receptor populations derived from mammalian T-cells. The model relies on a representation of the observed receptor counts as a multivariate Poisson abundance mixture (mPAM). A Bayesian parameter fitting procedure is proposed, based on the complete posterior likelihood, rather than the conditional one used typically in similar settings. The new procedure is shown to be considerably more efficient than its conditional counterpart (as measured by the Fisher information) in the regions of mPAM parameter space relevant to modeling T-cell data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Stochastic foundations of undulatory transport phenomena: generalized Poisson-Kac processes—part III extensions and applications to kinetic theory and transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2017-08-01

    This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Combining nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to Kac's program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.
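
The basic linear two-state Poisson-Kac process, a velocity flipping between ±c at Poisson rate a, is the starting point of the GPK construction; at long times it behaves diffusively with D = c²/(2a). A minimal simulation sketch with illustrative parameters:

```python
import random

random.seed(42)

c, a = 1.0, 2.0        # speed and Poisson switching rate (illustrative)
T, dt = 20.0, 0.01     # time horizon and integration step
n_paths = 1500

final = []
for _ in range(n_paths):
    x, v = 0.0, random.choice((-c, c))
    for _ in range(int(T / dt)):
        x += v * dt
        if random.random() < a * dt:   # Poisson switching event in this step
            v = -v
    final.append(x)

mean = sum(final) / n_paths
msd = sum(x * x for x in final) / n_paths
D_eff = msd / (2.0 * T)
print("effective D:", round(D_eff, 3), "theory c^2/(2a):", c * c / (2.0 * a))
```

Unlike a Brownian path, each realization has bounded speed c; the diffusive behaviour only emerges on time scales much longer than the switching time 1/a.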

  3. On the statistical properties of viral misinformation in online social media

    NASA Astrophysics Data System (ADS)

    Bessi, Alessandro

    2017-03-01

    The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.
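
For a homogeneous Poisson process the interarrival times are i.i.d. exponential, and with a conjugate Gamma(α, β) prior the posterior of the rate is Gamma(α + n, β + Σtᵢ). A minimal sketch of the Bayesian rate characterization on simulated interarrival data (the rate, prior and threshold below are illustrative, not the paper's values):

```python
import math, random

random.seed(7)

true_rate = 0.8                       # viral posts per day (illustrative)
n = 400
gaps = [random.expovariate(true_rate) for _ in range(n)]

# Conjugate update: Gamma(alpha0, beta0) prior -> Gamma(alpha0+n, beta0+sum(gaps)) posterior
alpha0, beta0 = 1.0, 1.0
alpha_post = alpha0 + n
beta_post = beta0 + sum(gaps)
post_mean = alpha_post / beta_post
post_sd = alpha_post ** 0.5 / beta_post

# Posterior predictive P(next gap > tau), averaging exp(-lam*tau) over the posterior
tau = 2.0
draws = [random.gammavariate(alpha_post, 1.0 / beta_post) for _ in range(20_000)]
p_exceed = sum(math.exp(-lam * tau) for lam in draws) / len(draws)
print(round(post_mean, 3), round(post_sd, 3), round(p_exceed, 3))
```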

  4. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    NASA Astrophysics Data System (ADS)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm for fitting the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  5. A Poisson approach to the validation of failure time surrogate endpoints in individual patient data meta-analyses.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan

    2017-01-01

    Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).

  6. Infinitesimal Deformations of a Formal Symplectic Groupoid

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2011-09-01

    Given a formal symplectic groupoid G over a Poisson manifold (M, π_0), we define a new object, an infinitesimal deformation of G, which can be thought of as a formal symplectic groupoid over the manifold M equipped with an infinitesimal deformation π_0 + ε π_1 of the Poisson bivector field π_0. To any pair of natural star products (∗, ∗̃) having the same formal symplectic groupoid G we relate an infinitesimal deformation of G. We call it the deformation groupoid of the pair (∗, ∗̃). To each star product with separation of variables ∗ on a Kähler-Poisson manifold M we relate another star product with separation of variables ∗̂ on M. We build an algorithm for calculating the principal symbols of the components of the logarithm of the formal Berezin transform of a star product with separation of variables ∗. This algorithm is based upon the deformation groupoid of the pair (∗, ∗̂).

  7. Transport Equation Based Wall Distance Computations Aimed at Flows With Time-Dependent Geometry

    NASA Technical Reports Server (NTRS)

    Tucker, Paul G.; Rumsey, Christopher L.; Bartels, Robert E.; Biedron, Robert T.

    2003-01-01

    Eikonal, Hamilton-Jacobi and Poisson equations can be used for economical nearest wall distance computation and modification. Economical computations may be especially useful for aeroelastic and adaptive grid problems for which the grid deforms, and the nearest wall distance needs to be repeatedly computed. Modifications are directed at remedying turbulence model defects. For complex grid structures, implementation of the Eikonal and Hamilton-Jacobi approaches is not straightforward. This prohibits their use in industrial CFD solvers. However, both the Eikonal and Hamilton-Jacobi equations can be written in advection and advection-diffusion forms, respectively. These, like the Poisson's Laplacian, are commonly occurring industrial CFD solver elements. Use of the NASA CFL3D code to solve the Eikonal and Hamilton-Jacobi equations in advection-based forms is explored. The advection-based distance equations are found to have robust convergence. Geometries studied include single and two element airfoils, wing body and double delta configurations along with a complex electronics system. It is shown that for Eikonal accuracy, upwind metric differences are required. The Poisson approach is found effective and, since it does not require offset metric evaluations, easiest to implement. The sensitivity of flow solutions to wall distance assumptions is explored. Generally, results are not greatly affected by wall distance traits.
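
The Poisson wall-distance approach can be sketched in one dimension: solve ∇²φ = −1 with φ = 0 on the walls, then recover the distance from d = −|∇φ| + sqrt(|∇φ|² + 2φ), which is exact for a plane channel. A minimal sketch (unit-width channel, simple tridiagonal solve; not the CFL3D implementation):

```python
import numpy as np

# Poisson wall distance in a unit-width plane channel:
# solve phi'' = -1 with phi = 0 at both walls (Thomas algorithm),
# then d = -|phi'| + sqrt(phi'^2 + 2*phi).
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

sub = np.full(n - 2, 1.0)            # sub-diagonal
diag = np.full(n - 2, -2.0)          # diagonal
sup = np.full(n - 2, 1.0)            # super-diagonal
rhs = np.full(n - 2, -h * h)

for i in range(1, n - 2):            # forward elimination
    w = sub[i] / diag[i - 1]
    diag[i] -= w * sup[i - 1]
    rhs[i] -= w * rhs[i - 1]

phi = np.zeros(n)
phi[n - 2] = rhs[-1] / diag[-1]
for i in range(n - 4, -1, -1):       # back substitution
    phi[i + 1] = (rhs[i] - sup[i] * phi[i + 2]) / diag[i]

grad = np.gradient(phi, x)
dist = -np.abs(grad) + np.sqrt(grad**2 + 2.0 * phi)
exact = np.minimum(x, 1.0 - x)
print("max |dist - exact|:", float(np.max(np.abs(dist - exact))))
```

In multiple dimensions the same formula gives only an approximate distance, which is the trade-off against the Eikonal approach noted in the abstract.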

  9. Collective Poisson process with periodic rates: applications in physics from micro-to nanodevices.

    PubMed

    da Silva, Roberto; Lamb, Luis C; Wirth, Gilson Inacio

    2011-01-28

    Continuous reductions in the dimensions of semiconductor devices have led to an increasing number of noise sources, including random telegraph signals (RTS) due to the capture and emission of electrons by traps at random positions between oxide and semiconductor. The models traditionally used for microscopic devices become of limited validity in nano- and mesoscale systems since, in such systems, distributed quantities such as electron and trap densities, and concepts like electron mobility, become inadequate to model electrical behaviour. In addition, recent experimental works have shown that RTS in semiconductor devices based on carbon nanotubes lead to giant current fluctuations. Therefore, the physics of this phenomenon and techniques to decrease the amplitudes of RTS need to be better understood. This problem can be described as a collective Poisson process under different, but time-independent, rates τ_c and τ_e that control the capture and emission of electrons by traps distributed over the oxide. Thus, models that consider calculations performed under time-dependent periodic capture and emission rates should be of interest in order to model more efficient devices. We show a complete theoretical description of a model that is capable of showing a noise reduction of current fluctuations in the time domain, and a reduction of the power spectral density in the frequency domain, in semiconductor devices as predicted by previous experimental work. We do so through numerical integrations and a novel Markov chain Monte Carlo (MCMC) algorithm based on microscopic discrete values. The proposed model also handles the ballistic regime, relevant in nano- and mesoscale devices. Finally, we show that the ballistic regime leads to nonlinearity in the electrical behaviour.
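
A single trap with mean capture time τ_c and mean emission time τ_e generates a two-level RTS whose equilibrium occupancy is τ_e/(τ_c + τ_e). The sketch below uses constant (time-independent) rates, i.e. the traditional case that the paper generalizes to periodic rates; parameter values are illustrative:

```python
import random

random.seed(3)

tau_c, tau_e = 2.0, 1.0       # mean capture and emission times (illustrative)
T = 20_000.0                  # total simulated time

t, state = 0.0, 0             # state 0 = trap empty, 1 = trap occupied
time_occupied = 0.0
n_transitions = 0
while t < T:
    # Exponential dwell in the current state, truncated at the horizon
    dwell = random.expovariate(1.0 / (tau_c if state == 0 else tau_e))
    dwell = min(dwell, T - t)
    if state == 1:
        time_occupied += dwell
    t += dwell
    state ^= 1
    n_transitions += 1

occupancy = time_occupied / T
print("occupancy:", round(occupancy, 4), "theory:", round(tau_e / (tau_c + tau_e), 4))
```

Summing many such independent traps with distributed rates produces the collective process of the abstract; making τ_c and τ_e periodic in time is the extension the paper analyzes.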

  10. Analysis of photon count data from single-molecule fluorescence experiments

    NASA Astrophysics Data System (ADS)

    Burzykowski, T.; Szubiakowski, J.; Rydén, T.

    2003-03-01

    We consider single-molecule fluorescence experiments with data in the form of counts of photons registered over multiple time-intervals. Based on the observation schemes, linking back to works by Dehmelt [Bull. Am. Phys. Soc. 20 (1975) 60] and Cook and Kimble [Phys. Rev. Lett. 54 (1985) 1023], we propose an analytical approach to the data based on the theory of Markov-modulated Poisson processes (MMPP). In particular, we consider maximum-likelihood estimation. The method is illustrated using a real-life dataset. Additionally, the properties of the proposed method are investigated through simulations and compared to two other approaches developed by Yip et al. [J. Phys. Chem. A 102 (1998) 7564] and Molski [Chem. Phys. Lett. 324 (2000) 301].

  11. A Unified Framework for Complex Networks with Degree Trichotomy Based on Markov Chains.

    PubMed

    Hui, David Shui Wing; Chen, Yi-Chao; Zhang, Gong; Wu, Weijie; Chen, Guanrong; Lui, John C S; Li, Yingtao

    2017-06-16

    This paper establishes a Markov chain model as a unified framework for describing the evolution processes in complex networks. The unique feature of the proposed model is its capability in addressing the formation mechanism that can reflect the "trichotomy" observed in degree distributions, based on which closed-form solutions can be derived. Important special cases of the proposed unified framework are the classical models, including Poisson, exponential and power-law distributed networks. Both simulation and experimental results demonstrate a good match of the proposed model with real datasets, showing its superiority over the classical models. Implications of the model for various applications, including citation analysis, online social networks and vehicular network design, are also discussed in the paper.

  12. Unimodularity criteria for Poisson structures on foliated manifolds

    NASA Astrophysics Data System (ADS)

    Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury

    2018-03-01

    We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class.

  13. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
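
A simple way to probe for inflated zeros (not the test developed in the paper) is a parametric bootstrap: fit the Poisson rate by the sample mean and ask how often a Poisson sample of the same size shows at least as many zeros as observed. Sketch on simulated data:

```python
import math, random

random.seed(11)

def poisson_sample(lam):
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def excess_zero_pvalue(y, n_sim=2000):
    # Parametric bootstrap: Poisson fitted by the sample mean; p-value is the
    # fraction of simulated samples with at least as many zeros as observed.
    n, lam = len(y), sum(y) / len(y)
    zeros_obs = sum(1 for v in y if v == 0)
    hits = sum(1 for _ in range(n_sim)
               if sum(1 for _ in range(n) if poisson_sample(lam) == 0) >= zeros_obs)
    return (hits + 1) / (n_sim + 1)

# 30% structural zeros mixed into Poisson(2) counts
y_zip = [0 if random.random() < 0.3 else poisson_sample(2.0) for _ in range(300)]
# A plain Poisson sample with the same mean, for comparison
y_poi = [poisson_sample(sum(y_zip) / len(y_zip)) for _ in range(300)]

p_zip = excess_zero_pvalue(y_zip)
p_poi = excess_zero_pvalue(y_poi)
print(p_zip, p_poi)
```

The zero-inflated sample is flagged decisively while the plain Poisson sample is not; the paper's test addresses the calibration issues such naive checks share with the Vuong test in regression settings.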

  14. Double-observer line transect surveys with Markov-modulated Poisson process models for animal availability.

    PubMed

    Borchers, D L; Langrock, R

    2015-12-01

    We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  15. Protein-ion binding process on finite macromolecular concentration. A Poisson-Boltzmann and Monte Carlo study.

    PubMed

    de Carvalho, Sidney Jurado; Fenley, Márcia O; da Silva, Fernando Luís Barroso

    2008-12-25

    Electrostatic interactions are one of the key driving forces for protein-ligand complexation. Different levels of theoretical modeling of such processes are available in the literature. Most studies in the molecular biology field are performed within the framework of numerical solutions of the Poisson-Boltzmann equation and dielectric continuum models. In such dielectric continuum models, there are two pivotal questions: (a) how the protein dielectric medium should be modeled, and (b) what protocol should be used when solving this effective Hamiltonian. By means of Monte Carlo (MC) and Poisson-Boltzmann (PB) calculations, we define the applicability of the PB approach with linear and nonlinear responses for macromolecular electrostatic interactions in electrolyte solution, revealing some physical mechanisms and limitations behind it, especially due to the increase of both macromolecular charge and concentration outside the strong coupling regime. A discrepancy between PB and MC for binding constant shifts is shown and explained in terms of the manner in which PB approximates the excess chemical potentials of the ligand, and not as a consequence of the nonlinear thermal treatment and/or explicit ion-ion interactions, as could be argued. Our findings also show that the nonlinear PB predictions with a low dielectric response reproduce well the pK shift calculations carried out with a uniform dielectric model. This confirms and completes previous results obtained by both MC and linear PB calculations.
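
The linear/nonlinear PB distinction can be illustrated in one dimension: near a charged plate in reduced units, ψ'' = sinh ψ has the first integral ψ' = −2 sinh(ψ/2) for profiles decaying at infinity, with the exact Gouy-Chapman solution ψ = 4 artanh(tanh(ψ₀/4)·e^(−x)). The sketch integrates the first-order form and contrasts it with the linearized (Debye-Hückel) profile ψ₀·e^(−x); the surface potential ψ₀ = 4 is an illustrative, strongly nonlinear choice:

```python
import numpy as np

# 1D nonlinear Poisson-Boltzmann near a charged plate (reduced units, kappa = 1):
# integrate psi' = -2*sinh(psi/2) with RK4 and compare against Gouy-Chapman
# and the linearized Debye-Hueckel solution.
def f(psi):
    return -2.0 * np.sinh(psi / 2.0)

psi0, h, n = 4.0, 0.01, 601
psi = np.empty(n)
psi[0] = psi0
for i in range(n - 1):
    k1 = f(psi[i])
    k2 = f(psi[i] + 0.5 * h * k1)
    k3 = f(psi[i] + 0.5 * h * k2)
    k4 = f(psi[i] + h * k3)
    psi[i + 1] = psi[i] + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

x = np.arange(n) * h
gamma = np.tanh(psi0 / 4.0)
g = gamma * np.exp(-x)
exact = 2.0 * np.log((1.0 + g) / (1.0 - g))      # Gouy-Chapman profile
dh = psi0 * np.exp(-x)                            # linear (Debye-Hueckel) profile

err = float(np.max(np.abs(psi - exact)))
print("RK4 vs Gouy-Chapman:", err)
print("at x = 1: linear", float(dh[100]), "nonlinear", float(psi[100]))
```

The linear response markedly overestimates the mid-range potential at high surface charge, the same qualitative failure mode of linear PB discussed in the abstract.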

  16. Poisson's Ratio of a Hyperelastic Foam Under Quasi-static and Dynamic Loading

    DOE PAGES

    Sanborn, Brett; Song, Bo

    2018-06-03

    Poisson's ratio is a material constant representing compressibility of material volume. However, when soft, hyperelastic materials such as silicone foam are subjected to large deformation into densification, the Poisson's ratio may rather significantly change, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios where foams are used as isolators. The evolution of Poisson's ratio of silicone foam materials has not yet been characterized, particularly under dynamic loading. In this study, radial and axial measurements of specimen strain are conducted simultaneously during quasi-static and dynamic compression tests to determine the Poisson's ratio of silicone foam. The Poisson's ratio of silicone foam exhibited a transition from compressible to nearly incompressible at a threshold strain that coincided with the onset of densification in the material. Poisson's ratio as a function of engineering strain was different at quasi-static and dynamic rates. Here, the Poisson's ratio behavior is presented and can be used to improve constitutive modeling of silicone foams subjected to a broad range of mechanical loading.
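
The quantity measured in the study follows directly from the simultaneous strain records: Poisson's ratio ν = −ε_radial/ε_axial. The numbers below are assumed values illustrating a foam moving toward the incompressible limit (ν → 0.5) at densification, not the paper's data:

```python
# Poisson's ratio from simultaneous strain records: nu = -eps_radial / eps_axial.
# The strain values are assumptions for illustration, not the study's measurements.
eps_axial = [-0.05, -0.15, -0.30, -0.50, -0.65]   # compressive axial strain
eps_radial = [0.002, 0.009, 0.045, 0.200, 0.310]  # lateral expansion

nu = [-er / ea for er, ea in zip(eps_radial, eps_axial)]
for ea, v in zip(eps_axial, nu):
    print(f"axial strain {ea:+.2f} -> Poisson's ratio {v:.2f}")
```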

  18. Optimal Base Station Density of Dense Network: From the Viewpoint of Interference and Load.

    PubMed

    Feng, Jianyuan; Feng, Zhiyong

    2017-09-11

    Network densification is attracting increasing attention recently due to its ability to improve network capacity by spatial reuse and relieve congestion by offloading. However, excessive densification and aggressive offloading can also cause the degradation of network performance due to problems of interference and load. In this paper, with consideration of load issues, we study the optimal base station density that maximizes the throughput of the network. The expected link rate and the utilization ratio of the contention-based channel are derived as functions of base station density using the Poisson Point Process (PPP) and Markov chains. They reveal the rules of deployment. Based on these results, we obtain the throughput of the network and indicate the optimal deployment density under different network conditions. Extensive simulations are conducted to validate our analysis and show the substantial performance gain obtained by the proposed deployment scheme. These results can provide guidance for network densification.
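
Under a PPP of intensity λ, the number of base stations in a region of area |A| is Poisson(λ|A|) with i.i.d. uniform positions, and the distance from a user to the nearest base station satisfies the void probability P(R > r) = exp(−λπr²). A minimal sketch verifying this by simulation (intensity and window size are illustrative):

```python
import math, random

random.seed(5)

lam, W = 4.0, 6.0                 # intensity (BS per unit area) and window side

def poisson_sample(mean):
    # Knuth's method; fine for mean = lam*W*W = 144
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def nearest_bs_distance():
    # One PPP realization: Poisson count, uniform positions; user at the window centre
    bs = [(random.uniform(0, W), random.uniform(0, W))
          for _ in range(poisson_sample(lam * W * W))]
    return min(math.hypot(x - W / 2, y - W / 2) for x, y in bs)

dists = [nearest_bs_distance() for _ in range(5000)]

# Void probability of the PPP: P(R > r) = exp(-lam*pi*r^2)
r = 0.25
emp = sum(1 for d in dists if d > r) / len(dists)
theory = math.exp(-lam * math.pi * r * r)
print(round(emp, 3), round(theory, 3))
```

Distributions of this kind are what make the link-rate and interference expressions in PPP-based analyses tractable in closed form.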

  19. Repairable-conditionally repairable damage model based on dual Poisson processes.

    PubMed

    Lind, B K; Persson, L M; Edgren, M R; Hedlöf, I; Brahme, A

    2003-09-01

    The advent of intensity-modulated radiation therapy makes it increasingly important to model the response accurately when large volumes of normal tissues are irradiated by controlled graded dose distributions aimed at maximizing tumor cure and minimizing normal tissue toxicity. The cell survival model proposed here is very useful and flexible for accurate description of the response of healthy tissues as well as tumors in classical and truly radiobiologically optimized radiation therapy. The repairable-conditionally repairable (RCR) model distinguishes between two different types of damage, namely the potentially repairable, which may also be lethal, i.e. if unrepaired or misrepaired, and the conditionally repairable, which may be repaired or may lead to apoptosis if it has not been repaired correctly. When potentially repairable damage is being repaired, for example by nonhomologous end joining, conditionally repairable damage may in addition require a high-fidelity correction by homologous repair. The induction of both types of damage is assumed to be described by Poisson statistics. The resultant cell survival expression has the unique ability to fit most experimental data well at low doses (the initial hypersensitive range), intermediate doses (on the shoulder of the survival curve), and high doses (on the quasi-exponential region of the survival curve). The complete Poisson expression can be approximated well by a simple bi-exponential cell survival expression, S(D) = e^(-aD) + bD e^(-cD), where the first term describes the survival of undamaged cells and the last term represents survival after complete repair of sublethal damage. The bi-exponential expression makes it easy to derive D_0, D_q, n and alpha, beta values to facilitate comparison with classical cell survival models.
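
The bi-exponential approximation quoted in the abstract, S(D) = e^(−aD) + bD·e^(−cD), is easy to evaluate; the parameter values below are illustrative (with b < a so that S stays at or below 1), not fitted RCR parameters:

```python
import math

# RCR bi-exponential survival: first term = undamaged cells, second term =
# cells surviving via complete repair of sublethal damage.
# Illustrative (not fitted) parameters; b < a keeps S(D) <= 1.
a, b, c = 2.0, 1.8, 0.8

def survival(D):
    return math.exp(-a * D) + b * D * math.exp(-c * D)

for D in (0.0, 0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"D = {D:4.1f} Gy   S = {survival(D):.3f}")
```

At low dose the first term dominates and the curve falls with slope a − b; at high dose the second term gives the quasi-exponential tail with slope c.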

  20. A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies

    PubMed Central

    Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.

    2018-01-01

    Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists in using the one-sample Kolmogorov-Smirnov (KS) test to directly test if the phylogeny is clock-like, in other words, if it follows a Poisson law. The ECD is computed from the discretized branch lengths and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and pseudo-replication, we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures, and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to test for relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch lengths ECD, instead of one consensus tree, yields considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759
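The strict-clock variant of the test can be sketched as follows, with simulated counts standing in for discretized branch lengths from a real Bayesian ensemble; as described above, λ is estimated as the average branch length:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-in for discretized branch lengths pooled from an
# ensemble of Bayesian trees (here drawn from a true Poisson law).
branch_lengths = rng.poisson(lam=4.0, size=500)
lam_hat = branch_lengths.mean()  # lambda = average branch length over the ensemble

# One-sample KS test of the empirical CDF against Poisson(lam_hat).
stat, pval = stats.kstest(branch_lengths, lambda k: stats.poisson.cdf(k, mu=lam_hat))
```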

  1. A multiscale filter for noise reduction of low-dose cone beam projections

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Farr, Jonathan B.

    2015-08-01

    The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2/(2σ_f^2)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of the residual noise, the optimal σ_f^2 is proved to be proportional to the noiseless fluence and modulated by the local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
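The core idea, a Gaussian filter whose width varies with the (estimated) fluence, can be sketched in one dimension. The proportionality constant and the use of the true fluence as the sigma map are simplifications for illustration, not the paper's full structure-adaptive scheme:

```python
import numpy as np

def local_gaussian_smooth(signal, sigmas, radius=10):
    """Smooth each sample with its own Gaussian width (a per-pixel scale)."""
    x = np.arange(-radius, radius + 1)
    padded = np.pad(signal, radius, mode="edge")
    out = np.empty_like(signal, dtype=float)
    for i, s in enumerate(sigmas):
        w = np.exp(-x**2 / (2.0 * s**2))
        w /= w.sum()
        out[i] = np.dot(w, padded[i:i + 2 * radius + 1])
    return out

rng = np.random.default_rng(1)
fluence = 200.0 + 100.0 * np.sin(np.linspace(0, 2 * np.pi, 256))  # noiseless mean
noisy = rng.poisson(fluence).astype(float)                        # Poisson photon noise
# Wider filter where the fluence is high, per the sigma_f^2 ∝ fluence result.
sigmas = 1.0 + 2.0 * (fluence / fluence.max())
smoothed = local_gaussian_smooth(noisy, sigmas)
```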

  2. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.

  3. MODEL FOR INSTANTANEOUS RESIDENTIAL WATER DEMANDS

    EPA Science Inventory

    Residential water use is visualized as a customer-server interaction often encountered in queueing theory. Individual customers are assumed to arrive according to a nonhomogeneous Poisson process, then engage water servers for random lengths of time. Busy servers are assumed t...

  4. Tailpulse signal generator

    DOEpatents

    Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA

    2009-06-23

    A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses, and the random Poisson distribution pulse timing for radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while outputting the digital value to a digital to analog converter (DAC). Pulse amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
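The generation scheme described above can be sketched in software: exponential inter-arrival intervals give Poisson timing, and overlapping exponential tails reproduce pileup. The rate, decay constant, and amplitude range below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 5e3      # mean event rate [1/s] (illustrative)
tau = 50e-6     # exponential decay constant of a tailpulse [s] (illustrative)
fs = 1e6        # sample rate [Hz]
n = 20000       # number of samples (~20 ms of signal)

t = np.arange(n) / fs
trace = np.zeros(n)
# Poisson timing: inter-arrival intervals are exponentially distributed.
arrival = rng.exponential(1.0 / rate)
while arrival < t[-1]:
    amp = rng.uniform(0.1, 1.0)   # pseudo-randomly selected pulse amplitude
    idx = int(arrival * fs)
    # Add an exponentially decaying tail; overlapping tails model pileup.
    trace[idx:] += amp * np.exp(-(t[idx:] - arrival) / tau)
    arrival += rng.exponential(1.0 / rate)
```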

  5. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  6. Applications of MMPBSA to Membrane Proteins I: Efficient Numerical Solutions of Periodic Poisson-Boltzmann Equation

    PubMed Central

    Botello-Smith, Wesley M.; Luo, Ray

    2016-01-01

    Continuum solvent models have been widely used in biomolecular modeling applications. Recently much attention has been given to the inclusion of an implicit membrane into existing continuum Poisson-Boltzmann solvent models to extend their applications to membrane systems. Inclusion of an implicit membrane complicates numerical solutions of the underlying Poisson-Boltzmann equation due to the dielectric inhomogeneity on the boundary surfaces of a computation grid. This can be alleviated by the use of the periodic boundary condition, a common practice in electrostatic computations in particle simulations. The conjugate gradient and successive over-relaxation methods are relatively straightforward to adapt to periodic calculations, but their convergence rates are quite low, limiting their applications to free energy simulations that require a large number of conformations to be processed. To accelerate convergence, the incomplete Cholesky preconditioning and the geometric multi-grid methods have been extended to incorporate periodicity for biomolecular applications. Impressive convergence behaviors were found, as in previous applications of these numerical methods, for the tested biomolecules and MMPBSA calculations. PMID:26389966

  7. Goodness-of-Fit Tests and Nonparametric Adaptive Estimation for Spike Train Analysis

    PubMed Central

    2014-01-01

    When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to test whether the observed process is a Poisson process, for instance, or if it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323–330, 1984; Brown et al. in Neural Comput. 14(2):325–346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785, 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in has sometimes very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model that have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of those methods on simulated data. We also provide a complete analysis with these tools on single unit activity recorded on a monkey during a sensory-motor task. Electronic Supplementary Material The online version of this article (doi:10.1186/2190-8567-4-3) contains supplementary material. PMID:24742008
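For the homogeneous Poisson case, the uniformity test at the heart of the procedure reduces to a KS test of rescaled spike times; a minimal sketch on simulated data (without the subsampling correction for the plug-in step proposed in the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T, rate = 10.0, 20.0
# A homogeneous Poisson spike train: conditionally on the spike count,
# the spike times are i.i.d. uniform on [0, T].
n_spikes = rng.poisson(rate * T)
spikes = np.sort(rng.uniform(0.0, T, size=n_spikes))

# Under H0 (homogeneous Poisson), spikes/T should be uniform on [0, 1].
stat, pval = stats.kstest(spikes / T, "uniform")
```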

  8. Assessment of Poisson, probit and linear models for genetic analysis of presence and number of black spots in Corriedale sheep.

    PubMed

    Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D

    2011-04-01

    Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental effects as random. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted the data slightly better than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.

  9. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Cracked rocks with positive and negative Poisson's ratio: real-crack properties extracted from pressure dependence of elastic-wave velocities

    NASA Astrophysics Data System (ADS)

    Zaitsev, Vladimir Y.; Radostin, Andrey V.; Dyskin, Arcady V.; Pasternak, Elena

    2017-04-01

    We report results of analysis of literature data on P- and S-wave velocities of rocks subjected to variable hydrostatic pressure. Out of about 90 examined samples, in more than 40% of the samples the reconstructed Poisson's ratios are negative for the lowest confining pressure, with gradual transition to the conventional positive values at higher pressure. The portion of rocks exhibiting negative Poisson's ratio appeared to be unexpectedly high. To understand the mechanism of negative Poisson's ratio, pressure dependences of P- and S-wave velocities were analyzed using the effective medium model in which the reduction in the elastic moduli due to cracks is described in terms of compliances with respect to shear and normal loading that are imparted to the rock by the presence of cracks. This is in contrast to widely used descriptions of effective cracked media based on a specific crack model (e.g., penny-shape crack) in which the ratio between normal and shear compliances of such a crack is strictly predetermined. The analysis of pressure dependences of the elastic wave velocities makes it possible to reveal the ratio between pure normal and shear compliances (called q-ratio below) for real defects and quantify their integral content in the rock. The examination performed demonstrates that a significant portion (over 50%) of cracks exhibit a q-ratio several times higher than that assumed for the conventional penny-shape cracks. This leads to faster reduction of the Poisson's ratio with increasing crack concentration. Samples with negative Poisson's ratio are characterized by elevated q-ratio and, simultaneously, elevated crack concentration. Our results clearly indicate that the traditional crack model is not adequate for a significant portion of rocks and that the interaction between the opposite crack faces, leading to domination of the normal compliance and reduced shear displacement discontinuity, can play an important role in the mechanical behavior of rocks.

  11. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.

  12. Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data

    USGS Publications Warehouse

    Wikle, C.K.; Royle, J. Andrew

    2005-01-01

    Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.

  13. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in the goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
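For the failure-terminated case, the maximum likelihood estimators of the Weibull-intensity (Crow-AMSAA) parameters have a closed form; a sketch with illustrative failure times rather than the engine data:

```python
import math

# Cumulative failure times of a repairable system (illustrative, not SSME data).
times = [5.0, 40.0, 43.0, 175.0, 389.0, 712.0, 747.0, 795.0, 1299.0, 1478.0]
T = times[-1]   # failure-terminated test: observation ends at the last failure
n = len(times)

# MLEs for the NHPP with Weibull intensity u(t) = lam * beta * t**(beta - 1);
# the log term for the last failure is zero, so the sum runs over the first n-1.
beta_hat = n / sum(math.log(T / t) for t in times[:-1])
lam_hat = n / T ** beta_hat

# Expected number of failures on [0, t] is the integral of the ROCOF: lam * t**beta.
expected_failures = lam_hat * T ** beta_hat   # equals n at t = T by construction
```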

  14. Based on records of Three Gorge Telemetric Seismic Network to analyze Vibration process of micro fracture of rock landslide

    NASA Astrophysics Data System (ADS)

    WANG, Q.

    2017-12-01

    We used the finite element analysis software GeoStudio to establish a vibration analysis model of the Qianjiangping landslide, located in the Three Gorges Reservoir area. In the QUAKE/W module, we chose appropriate dynamic elasticity moduli and Poisson's ratios for the soil layer and rock stratum. For loading, we selected waveform data recorded by the Three Gorge Telemetric Seismic Network as the input ground motion, comprising five rupture events recorded at the Lujiashan seismic station. In the dynamic simulation, we focused mainly on the sliding process as the earthquake record was applied. The simulation result shows that the Qianjiangping landslide was not only affected by its own static forces, but also experienced a dynamic process of micro fracture-creep-slip rupture-creep-slip. This provides a new approach for assessing the feasibility of early warning of rock landslides in future research.

  15. On the expected discounted penalty functions for two classes of risk processes under a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Lu, Zhaoyang; Xu, Wei; Sun, Decai; Han, Weiguo

    2009-10-01

    In this paper, the discounted penalty (Gerber-Shiu) functions for a risk model involving two independent classes of insurance risks under a threshold dividend strategy are developed. We also assume that the two claim number processes are independent Poisson and generalized Erlang (2) processes, respectively. When the surplus is above this threshold level, dividends are paid at a constant rate that does not exceed the premium rate. Two systems of integro-differential equations for the discounted penalty functions are derived, based on whether the surplus is above the threshold level. Laplace transforms of the discounted penalty functions when the surplus is below the threshold level are obtained. We also derive a system of renewal equations satisfied by the discounted penalty function with initial surplus above the threshold level via the Dickson-Hipp operator. Finally, analytical solutions of the two systems of integro-differential equations are presented.

  16. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
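The linear noise model can be checked on simulated photon counts: for a pure Poisson process the variance-versus-mean line has slope ≈ 1 and intercept ≈ 0. A sketch, with illustrative signal levels:

```python
import numpy as np

rng = np.random.default_rng(3)
# For a photon-counting sensor, Poisson statistics give variance ≈ mean.
levels = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
means, variances = [], []
for mu in levels:
    counts = rng.poisson(mu, size=20000)
    means.append(counts.mean())
    variances.append(counts.var())

# Linear noise model: variance = a + b * signal (b ≈ 1, a ≈ 0 for pure Poisson).
b, a = np.polyfit(means, variances, 1)
```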

  17. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    PubMed

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
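For independent data, the approach amounts to solving the Poisson score equations for a binary outcome and pairing the estimate with a robust sandwich variance. A self-contained sketch on simulated data (the clustered GEE version studied in the paper would instead sum scores within clusters in the middle "meat" matrix):

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated binary outcome with a binary exposure; true relative risk = 2.
n = 5000
x = rng.binomial(1, 0.5, n)
risk = np.where(x == 1, 0.30, 0.15)
y = rng.binomial(1, risk)
X = np.column_stack([np.ones(n), x])

# Newton-Raphson for the Poisson log-likelihood with log link (binary response).
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    step = np.linalg.solve(hess, grad)
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

# Robust (sandwich) variance: inv(H) @ B @ inv(H), B = sum of score outer products.
mu = np.exp(X @ beta)
H = X.T @ (X * mu[:, None])
B = X.T @ (X * ((y - mu) ** 2)[:, None])
Hinv = np.linalg.inv(H)
robust_cov = Hinv @ B @ Hinv

rr_hat = np.exp(beta[1])   # estimated relative risk
```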

  18. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing the minimization problem to be solved iteratively within some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
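As a toy illustration of the numerical core, the following sketch solves a 1-D discretized Poisson problem with a preconditioned conjugate gradient method (a simple Jacobi preconditioner here, in place of the authors' scheme, and a toy stand-in for the generalized Poisson problem):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# 1-D Poisson problem -u'' = f on (0, 1) with u(0) = u(1) = 0, discretized by
# second-order finite differences; f is chosen so the exact solution is sin(pi*x).
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
f = np.pi**2 * np.sin(np.pi * x)

M = diags(1.0 / A.diagonal())   # Jacobi (diagonal) preconditioner
u, info = cg(A, f, M=M)         # info == 0 signals convergence
```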

  19. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    PubMed

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.

  20. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the minimum standard errors. The results showed that the mixed zero-inflated Poisson model provided the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in the longitudinal count data.
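A minimal sketch of fitting a fixed-effects zero-inflated Poisson model by direct likelihood maximization; the mixed-model (random effects) machinery of the study is omitted, and the data are simulated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(11)
# Simulated ZIP counts: structural zero with probability pi, else Poisson(lam).
pi_true, lam_true = 0.3, 2.5
n = 4000
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, n))

def zip_nll(params):
    """Negative log-likelihood of the ZIP model, parameterized on the real line."""
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    lam = np.exp(log_lam)
    # P(0) = pi + (1 - pi) e^{-lam};  P(k > 0) = (1 - pi) e^{-lam} lam^k / k!
    ll0 = np.log(pi + (1.0 - pi) * np.exp(-lam))
    llk = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll0, llk))

res = minimize(zip_nll, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```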

  1. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S.; Genovese, L.

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing the minimization problem to be solved iteratively within some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  2. Sedimentary and crustal thicknesses and Poisson's ratios for the NE Tibetan Plateau and its adjacent regions based on dense seismic arrays

    NASA Astrophysics Data System (ADS)

    Wang, Weilai; Wu, Jianping; Fang, Lihua; Lai, Guijuan; Cai, Yan

    2017-03-01

    The sedimentary and crustal thicknesses and Poisson's ratios of the NE Tibetan Plateau and its adjacent regions are estimated by h-κ stacking and CCP imaging of receiver functions from the data of 1,317 stations. The horizontal resolution of the obtained results is as high as 0.5° × 0.5°, which can be used for further high-resolution model construction in the region. The crustal thicknesses from Airy's equilibrium are smaller than our results in the Sichuan Basin, Qilian tectonic belt, northern Alxa block and Qaidam Basin, which is consistent with the high densities in the mantle lithosphere and may indicate that the high-density lithosphere drags the crust down overall. High Poisson's ratios and low-velocity zones are found in the mid- and lower crust beneath the eastern Qilian tectonic belt and the boundary areas of the Ordos block, indicating that partial melting may exist in these regions. Low Poisson's ratios and low-velocity anomalies are observed in the crust in the NE Tibetan Plateau, implying that the mafic lower crust is thinning or missing and that the mid- and lower crust does not exhibit melting or partial melting in the NE Tibetan Plateau, and weak flow layers are not likely to exist in this region.

  3. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    NASA Astrophysics Data System (ADS)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
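
    The expected-failure computation described above can be sketched directly: a Weibull intensity has cumulative form Λ(t) = (t/η)^β, so the expected number of failures in a usage interval [t1, t2] is Λ(t2) − Λ(t1). The shape and scale values below are illustrative, not estimates from the article:

```python
# Sketch: expected failures of a repairable system under a nonhomogeneous
# Poisson process with Weibull (power-law) intensity. Shape > 1 means an
# increasing ROCOF, i.e. a deteriorating system. Parameter values are
# illustrative, not MLEs from the article.

def cumulative_intensity(t, shape, scale):
    """Lambda(t) = (t/scale)**shape, the expected number of failures in [0, t]."""
    return (t / scale) ** shape

def expected_failures(t1, t2, shape, scale):
    """Expected number of failures in the usage interval [t1, t2]."""
    return cumulative_intensity(t2, shape, scale) - cumulative_intensity(t1, shape, scale)

# Expected failures in each of the first three annual usage intervals.
annual = [expected_failures(y, y + 1, shape=2.0, scale=4.0) for y in range(3)]
```

    Multiplying each annual expectation by a cost per failure gives the expected annual maintenance cost used in the budgeting decisions the abstract describes.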

  4. Inhomogeneity of the density of Parascaris spp. eggs in faeces of individual foals and the use of hypothesis testing for treatment decision making.

    PubMed

    Wilkes, E J A; Cowling, A; Woodgate, R G; Hughes, K J

    2016-10-15

    Faecal egg counts (FEC) are used widely for monitoring of parasite infection in animals, treatment decision-making and estimation of anthelmintic efficacy. When a single count or sample mean is used as a point estimate of the expectation of the egg distribution over some time interval, the variability in the egg density is not accounted for. Although the variability of egg count data, including its sources, has been described and quantified, the spatiotemporal distribution of nematode eggs in faeces is not well understood. To our knowledge, statistical inference about the mean egg count has not previously been used for treatment decision-making. The aim of this study was to examine the density of Parascaris eggs in solution and faeces and to describe the use of hypothesis testing for decision-making. Faeces from two foals with Parascaris burdens were mixed with magnesium sulphate solution and 30 McMaster chambers were examined to determine the egg distribution in a well-mixed solution. To examine the distribution of eggs in faeces from an individual animal, three faecal piles from a foal with a known Parascaris burden were obtained, from which 81 counts were performed. A single faecal sample was also collected daily from 20 foals on three consecutive days and a FEC was performed on three separate portions of each sample. As appropriate, Poisson or negative binomial confidence intervals for the distribution mean were calculated. Parascaris eggs in a well-mixed solution conformed to a homogeneous Poisson process, while the egg density in faeces was not homogeneous, but aggregated. This study provides an extension from homogeneous to inhomogeneous Poisson processes, leading to an understanding of why Poisson and negative binomial distributions correspondingly provide a good fit for egg count data. The application of one-sided hypothesis tests for decision-making is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
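
    A one-sided test of the kind the study describes can be sketched with an exact Poisson tail probability; the threshold mean and significance level here are illustrative, not values from the paper:

```python
import math

# Sketch of a one-sided exact Poisson test for treatment decisions: given
# an observed egg count k, test H0: mean count <= lam0 against
# H1: mean > lam0 via the upper-tail probability P(X >= k | lam0).
# The threshold lam0 and alpha below are illustrative, not from the study.

def poisson_pmf(i, lam):
    return math.exp(-lam + i * math.log(lam) - math.lgamma(i + 1))

def upper_tail_p(k, lam0):
    """P(X >= k) under Poisson(lam0), computed from the complement."""
    return 1.0 - sum(poisson_pmf(i, lam0) for i in range(k))

def treat(k, lam0, alpha=0.05):
    """Recommend treatment only when the count is significantly above lam0."""
    return upper_tail_p(k, lam0) < alpha

p_low = upper_tail_p(5, lam0=10.0)    # count below the threshold mean
p_high = upper_tail_p(20, lam0=10.0)  # count well above it
```

    An aggregated (non-Poisson) egg density would call for a negative binomial tail instead, which is why the paper distinguishes the two cases.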

  5. Electrodiffusion: a continuum modeling framework for biomolecular systems with realistic spatiotemporal resolution.

    PubMed

    Lu, Benzhuo; Zhou, Y C; Huber, Gary A; Bond, Stephen D; Holst, Michael J; McCammon, J Andrew

    2007-10-07

    A computational framework is presented for the continuum modeling of cellular biomolecular diffusion influenced by electrostatic driving forces. This framework is developed from a combination of state-of-the-art numerical methods, geometric meshing, and computer visualization tools. In particular, a hybrid of (adaptive) finite element and boundary element methods is adopted to solve the Smoluchowski equation (SE), the Poisson equation (PE), and the Poisson-Nernst-Planck equation (PNPE) in order to describe electrodiffusion processes. The finite element method is used because of its flexibility in modeling irregular geometries and complex boundary conditions. The boundary element method is used due to the convenience of treating the singularities in the source charge distribution and its accurate solution to electrostatic problems on molecular boundaries. Nonsteady-state diffusion can be studied using this framework, with the electric field computed using the densities of charged small molecules and mobile ions in the solvent. A solution for mesh generation for biomolecular systems is supplied, which is an essential component for the finite element and boundary element computations. The uncoupled Smoluchowski equation and Poisson-Boltzmann equation are considered as special cases of the PNPE in the numerical algorithm, and therefore can be solved in this framework as well. Two types of computations are reported in the results: stationary PNPE and time-dependent SE or Nernst-Planck equations solutions. A biological application of the first type is the ionic density distribution around a fragment of DNA determined by the equilibrium PNPE. The stationary PNPE with nonzero flux is also studied for a simple model system, and leads to the observation that the interference of the substrate charges with the electrostatic field strongly affects the reaction rate coefficient.
The second is a time-dependent diffusion process: the consumption of the neurotransmitter acetylcholine by acetylcholinesterase, determined by the SE and a single uncoupled solution of the Poisson-Boltzmann equation. The electrostatic effects, counterion compensation, spatiotemporal distribution, and diffusion-controlled reaction kinetics are analyzed and different methods are compared.

  6. On the Overdispersed Molecular Clock

    PubMed Central

    Takahata, Naoyuki

    1987-01-01

    Rates of molecular evolution at some loci are more irregular than described by simple Poisson processes. Three situations under which molecular evolution would not follow simple Poisson processes are reevaluated from the viewpoint of the neutrality hypothesis: (i) concomitant or multiple substitutions in a gene, (ii) fluctuating substitution rates in time caused by coupled effects of deleterious mutations and bottlenecks, and (iii) changes in the degree of selective constraints against a gene (neutral space) caused by successive substitutions. The common underlying assumption that these causes are lineage nonspecific excludes the case where mutation rates themselves change systematically among lineages or taxonomic groups, and severely limits the extent of variation in the number of substitutions among lineages. Even under this stringent condition, however, the third hypothesis, the fluctuating neutral space model, can generate fairly large variation. This is described by a time-dependent renewal process, which does not exhibit any episodic nature of molecular evolution. It is argued that the observed elevated variances in the number of nucleotide or amino acid substitutions do not immediately call for positive Darwinian selection in molecular evolution. PMID:3596230

  7. Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.

    PubMed

    Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata

    2008-09-01

    A novel algorithmic scheme for numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
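
    The article's enhancements are model-specific, but the role of an adjustable relaxation coefficient can be illustrated on a generic problem: successive over-relaxation (SOR) applied to a 1D Poisson equation, where the choice of ω strongly affects the iteration count. This is a textbook sketch, not the enhanced PNP algorithm itself:

```python
import math

# Generic illustration (not the article's enhanced PNP algorithm):
# successive over-relaxation for the 1D Poisson problem -u'' = f,
# showing how the relaxation coefficient omega changes the sweep count.

def sor_poisson(f, n, omega, tol=1e-8, max_sweeps=100000):
    h = 1.0 / (n + 1)
    b = [f((i + 1) * h) * h * h for i in range(n)]
    u = [0.0] * n
    for sweep in range(1, max_sweeps + 1):
        max_delta = 0.0
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            gs = 0.5 * (b[i] + left + right)   # Gauss-Seidel value
            delta = omega * (gs - u[i])        # relaxed update
            u[i] += delta
            max_delta = max(max_delta, abs(delta))
        if max_delta < tol:
            return u, sweep
    return u, max_sweeps

f = lambda x: math.pi ** 2 * math.sin(math.pi * x)   # exact solution sin(pi x)
u_gs, sweeps_gs = sor_poisson(f, 31, omega=1.0)      # plain Gauss-Seidel
u_sor, sweeps_sor = sor_poisson(f, 31, omega=1.8)    # over-relaxed
```

    Near the optimal ω the sweep count drops by more than an order of magnitude, which is the kind of gain an adaptive relaxation coefficient aims to capture automatically.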

  8. Fedosov’s formal symplectic groupoids and contravariant connections

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2006-10-01

    Using Fedosov's approach we give a geometric construction of a formal symplectic groupoid over any Poisson manifold endowed with a torsion-free Poisson contravariant connection. In the case of Kähler-Poisson manifolds this construction provides, in particular, the formal symplectic groupoids with separation of variables. We show that the dual of a semisimple Lie algebra does not admit torsion-free Poisson contravariant connections.

  9. An approach to model monitoring and surveillance data of wildlife diseases-exemplified by Classical Swine Fever in wild boar.

    PubMed

    Stahnke, N; Liebscher, V; Staubach, C; Ziller, M

    2013-11-01

    The analysis of epidemiological field data from monitoring and surveillance systems (MOSSs) in wild animals is of great importance in order to evaluate the performance of such systems. By parameter estimation from MOSS data, conclusions about disease dynamics in the observed population can be drawn. To strengthen the analysis, the implementation of a maximum likelihood estimation is the main aim of our work. The new approach presented here is based on an underlying simple SIR (susceptible-infected-recovered) model for a disease scenario in a wildlife population. The three corresponding classes are assumed to govern the intensities (number of animals in the classes) of non-homogeneous Poisson processes. A sampling rate was defined which describes the process of data collection (for MOSSs). Further, the performance of the diagnostics was implemented in the model by a diagnostic matrix containing misclassification rates. Both descriptions of these MOSS parts were included in the Poisson process approach. For simulation studies, the combined model demonstrates its ability to validly estimate epidemiological parameters, such as the basic reproduction rate R0. These parameters will help the evaluation of existing disease control systems. They will also enable comparison with other simulation models. The model has been tested with data from a Classical Swine Fever (CSF) outbreak in wild boars (Sus scrofa scrofa L.) from a region of Germany (1999-2002). The results show that the hunting strategy as a sole control tool is insufficient to decrease the threshold for susceptible animals to eradicate the disease, since the estimated R0 confirms an ongoing epidemic of CSF. Copyright © 2013 Elsevier B.V. All rights reserved.
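
    Non-homogeneous Poisson processes of this kind are commonly simulated by Lewis-Shedler thinning; below is a minimal sketch with an illustrative linear intensity, not the SIR-driven intensities of the paper:

```python
import random

# Minimal sketch of Lewis-Shedler thinning for simulating event times of a
# non-homogeneous Poisson process with intensity lam(t) on [0, T]:
# propose events at a constant majorizing rate lam_max, then keep each
# proposal with probability lam(t)/lam_max. The linear intensity below is
# illustrative only.

def simulate_nhpp(lam, lam_max, T, rng):
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)        # candidate inter-event time
        if t > T:
            return times
        if rng.random() < lam(t) / lam_max:  # thinning acceptance step
            times.append(t)

rng = random.Random(42)
lam = lambda t: 2.0 + 0.01 * t               # slowly rising event rate
events = simulate_nhpp(lam, lam_max=12.0, T=1000.0, rng=rng)
# Expected count = integral of lam over [0, 1000] = 2000 + 5000 = 7000
```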

  10. Application of the Conway-Maxwell-Poisson generalized linear model for analyzing motor vehicle crashes.

    PubMed

    Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy

    2008-05-01

    This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subjected to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot, or has difficulty converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
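
    The COM-Poisson pmf is P(X = x) = λ^x / ((x!)^ν Z(λ, ν)), where Z is a normalizing series and ν is the dispersion parameter (ν < 1 over-dispersed, ν > 1 under-dispersed). A minimal sketch, truncating Z numerically, makes the ν = 1 Poisson special case explicit:

```python
import math

# Sketch of the Conway-Maxwell-Poisson pmf:
#   P(X = x) = lam**x / ((x!)**nu * Z(lam, nu)),
# with the normalizing series Z truncated at a fixed number of terms.
# nu = 1 recovers the ordinary Poisson distribution.

def com_poisson_pmf(x, lam, nu, terms=200):
    log_w = lambda j: j * math.log(lam) - nu * math.lgamma(j + 1)
    log_z = math.log(sum(math.exp(log_w(j)) for j in range(terms)))
    return math.exp(log_w(x) - log_z)

def poisson_pmf(x, lam):
    return math.exp(-lam + x * math.log(lam) - math.lgamma(x + 1))
```

    Working in log space avoids the overflow that (x!)^ν would otherwise cause for moderate x and ν.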

  11. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    PubMed

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying the methods to real data and using simulations, we demonstrate that conditional Poisson models are simpler to code and faster to run than conditional logistic analyses, and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model but, when not required, this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages.
The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
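
    The key identity is that conditioning on each stratum's total turns the Poisson likelihood into a multinomial one from which stratum intercepts cancel, so they never need to be estimated. A one-stratum sketch (variable names illustrative):

```python
import math

# Sketch: conditional Poisson log-likelihood for one stratum. Conditioning
# independent Poisson counts with linear predictors eta_i on their total N
# yields a multinomial likelihood in which any shared stratum intercept
# cancels out of the normalization, so it need not be estimated.

def conditional_poisson_loglik(counts, eta):
    """Multinomial log-likelihood of counts given linear predictors eta."""
    total = sum(counts)
    log_norm = math.log(sum(math.exp(e) for e in eta))
    coef = math.lgamma(total + 1) - sum(math.lgamma(y + 1) for y in counts)
    return coef + sum(y * (e - log_norm) for y, e in zip(counts, eta))

counts = [3, 1, 0, 5]
eta = [0.2, -0.1, 0.0, 0.4]
base = conditional_poisson_loglik(counts, eta)
# Adding a constant (a stratum intercept) to every eta leaves it unchanged.
shifted = conditional_poisson_loglik(counts, [e + 7.0 for e in eta])
```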

  12. Frequency-Rank Distributions

    ERIC Educational Resources Information Center

    Brookes, Bertram C.; Griffiths, Jose M.

    1978-01-01

    Frequency, rank, and frequency rank distributions are defined. Extensive discussion on several aspects of frequency rank distributions includes the Poisson process as a means of exploring the stability of ranks; the correlation of frequency rank distributions; and the transfer coefficient, a new measure in frequency rank distribution. (MBR)

  13. A New Model that Generates Lotka's Law.

    ERIC Educational Resources Information Center

    Huber, John C.

    2002-01-01

    Develops a new model for a process that generates Lotka's Law. Topics include measuring scientific productivity through the number of publications; rate of production; career duration; randomness; Poisson distribution; computer simulations; goodness-of-fit; theoretical support for the model; and future research. (Author/LRW)

  14. A Method of Poisson's Ratio Imaging Within a Material Part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1994-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.

  15. Method of Poisson's ratio imaging within a material part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1996-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.

  16. Effect of Poisson's loss factor of rubbery material on underwater sound absorption of anechoic coatings

    NASA Astrophysics Data System (ADS)

    Zhong, Jie; Zhao, Honggang; Yang, Haibin; Yin, Jianfei; Wen, Jihong

    2018-06-01

    Rubbery coatings embedded with air cavities are commonly used on underwater structures to reduce reflection of incoming sound waves. In this paper, the relationships between the Poisson's and modulus loss factors of rubbery materials are theoretically derived, and the different effects of the tiny Poisson's loss factor on the loss factors of the shear and longitudinal moduli are revealed. Given a complex Young's modulus and dynamic Poisson's ratio, it is found that the shear loss factor varies almost imperceptibly with the Poisson's loss factor and is very close to the loss factor of Young's modulus, while the longitudinal loss factor decreases almost linearly with increasing Poisson's loss factor. Then, a finite element (FE) model is used to investigate the effect of the tiny Poisson's loss factor, which is generally neglected in some FE models, on the underwater sound absorption of rubbery coatings. Results show that the tiny Poisson's loss factor has a significant effect on the sound absorption of homogeneous coatings within the concerned frequency range, while it has both frequency- and structure-dependent influence on the sound absorption of inhomogeneous coatings with embedded air cavities. Given the material parameters and cavity dimensions, a more obvious effect is observed for rubbery coatings with a larger lattice constant and/or a thicker cover layer.
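
    The derived relationships can be reproduced numerically with complex moduli. Assuming the sign convention ν* = ν(1 − iη_ν) together with E* = E(1 + iη_E), the standard isotropic relations give G* = E*/(2(1 + ν*)) and M* = E*(1 − ν*)/((1 + ν*)(1 − 2ν*)); the parameter values below are illustrative, not the paper's:

```python
# Numeric sketch (illustrative values; assumes the sign convention
# nu* = nu*(1 - i*eta_nu)): loss factors of the shear modulus
# G* = E*/(2(1 + nu*)) and the longitudinal modulus
# M* = E*(1 - nu*)/((1 + nu*)(1 - 2nu*)) as the Poisson's loss
# factor eta_nu varies.

def loss_factors(E, eta_E, nu, eta_nu):
    Ec = E * complex(1.0, eta_E)        # complex Young's modulus
    nuc = nu * complex(1.0, -eta_nu)    # complex Poisson's ratio
    G = Ec / (2 * (1 + nuc))
    M = Ec * (1 - nuc) / ((1 + nuc) * (1 - 2 * nuc))
    return G.imag / G.real, M.imag / M.real

eta_G0, eta_M0 = loss_factors(1e8, 0.3, 0.48, 0.0)     # no Poisson's loss
eta_G1, eta_M1 = loss_factors(1e8, 0.3, 0.48, 0.005)   # tiny Poisson's loss
```

    Even a tiny η_ν barely moves the shear loss factor but sharply lowers the longitudinal one, because the near-singular factor 1/(1 − 2ν*) amplifies its effect for nearly incompressible rubber.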

  17. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

    PubMed Central

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398
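
    The first class (Poisson marginals with dependence) is classically obtained by a common-shock construction: X1 = Y0 + Y1 and X2 = Y0 + Y2 with independent Poisson Y_j, so each X_i is Poisson and Cov(X1, X2) = λ0. A simulation sketch with illustrative rates:

```python
import math, random

# Sketch of the common-shock bivariate Poisson construction:
#   X1 = Y0 + Y1, X2 = Y0 + Y2, with independent Poisson Y_j.
# Marginals are Poisson(lam0 + lam_i) and Cov(X1, X2) = lam0 > 0.
# Rates below are illustrative.

def poisson_sample(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(7)
lam0, lam1, lam2 = 2.0, 1.0, 3.0
pairs = []
for _ in range(20000):
    y0 = poisson_sample(lam0, rng)
    pairs.append((y0 + poisson_sample(lam1, rng), y0 + poisson_sample(lam2, rng)))

n = len(pairs)
m1 = sum(a for a, _ in pairs) / n            # expect lam0 + lam1 = 3
m2 = sum(b for _, b in pairs) / n            # expect lam0 + lam2 = 5
cov = sum((a - m1) * (b - m2) for a, b in pairs) / n  # expect lam0 = 2
```

    A known limitation of this construction, noted in the review literature, is that it can only produce non-negative correlation.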

  18. Clustering, randomness and regularity in cloud fields. I - Theoretical considerations. II - Cumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.

    1992-01-01

    The current controversy regarding regularity versus clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models, which bias the observed nearest-neighbor statistics towards regularity. Interpretation of these nearest-neighbor statistics is discussed for many cases of superpositions of clustering, randomness, and regularity. A detailed analysis is carried out of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data, showing that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.
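
    For a homogeneous planar Poisson process of intensity λ, the nearest-neighbor distance has CDF G(r) = 1 − exp(−λπr²) and mean 1/(2√λ); this is the randomness baseline such clustering/regularity analyses compare against. A quick check on simulated uniform points (edge effects ignored, so the estimate is biased slightly upward):

```python
import math, random

# Sketch: mean nearest-neighbour distance of a uniform (approximately
# Poisson) point pattern in the unit square, compared with the Poisson
# expectation 1/(2*sqrt(lambda)). Boundary effects are ignored here;
# a clustered pattern would fall below this baseline, a regular one above.

rng = random.Random(3)
n = 500                                    # intensity lambda = 500 in unit square
pts = [(rng.random(), rng.random()) for _ in range(n)]

def nn_dist(i):
    xi, yi = pts[i]
    return min(math.hypot(xi - x, yi - y) for j, (x, y) in enumerate(pts) if j != i)

mean_nn = sum(nn_dist(i) for i in range(n)) / n
expected = 0.5 / math.sqrt(n)              # Poisson-process prediction
```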

  19. Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations

    DOE PAGES

    Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha

    2015-04-30

    Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.

  20. Seismological evidence for a sub-volcanic arc mantle wedge beneath the Denali volcanic gap, Alaska

    USGS Publications Warehouse

    McNamara, D.E.; Pasyanos, M.E.

    2002-01-01

    Arc volcanism in Alaska is strongly correlated with the 100 km depth contour of the western Aleutian Wadati-Benioff zone. Above the eastern portion of the Wadati-Benioff zone, however, there is a distinct lack of volcanism (the Denali volcanic gap). We observe high Poisson's ratio values (0.29-0.33) over the entire length of the Alaskan subduction zone mantle wedge based on regional variations of Pn and Sn velocities. High Poisson's ratios at this depth (40-70 km), adjacent to the subducting slab, are attributed to melting of mantle-wedge peridotites, caused by fluids liberated from the subducting oceanic crust and sediments. Observations of high values of Poisson's ratio beneath the Denali volcanic gap suggest that the mantle wedge contains melted material that is unable to reach the surface. We suggest that its inability to migrate through the overlying crust is due to increased compression in the crust at the northern apex of the curved Denali fault.

  1. Spatial Bayesian Latent Factor Regression Modeling of Coordinate-based Meta-analysis Data

    PubMed Central

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D.; Nichols, Thomas E.

    2017-01-01

    Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the paper are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to 1) identify areas of consistent activation; and 2) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterised as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. PMID:28498564

  2. Non-linear properties of metallic cellular materials with a negative Poisson's ratio

    NASA Technical Reports Server (NTRS)

    Choi, J. B.; Lakes, R. S.

    1992-01-01

    Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.

  3. Simulation based estimation of dynamic mechanical properties for viscoelastic materials used for vocal fold models

    NASA Astrophysics Data System (ADS)

    Rupitsch, Stefan J.; Ilg, Jürgen; Sutor, Alexander; Lerch, Reinhard; Döllinger, Michael

    2011-08-01

    In order to obtain a deeper understanding of the human phonation process and the mechanisms generating sound, realistic setups are built up containing artificial vocal folds. Usually, these vocal folds consist of viscoelastic materials (e.g., polyurethane mixtures). Reliable simulation based studies on the setups require the mechanical properties of the utilized viscoelastic materials. The aim of this work is the identification of mechanical material parameters (Young's modulus, Poisson's ratio, and loss factor) for those materials. Therefore, we suggest a low-cost measurement setup, the so-called vibration transmission analyzer (VTA) enabling to analyze the transfer behavior of viscoelastic materials for propagating mechanical waves. With the aid of a mathematical Inverse Method, the material parameters are adjusted in a convenient way so that the simulation results coincide with the measurement results for the transfer behavior. Contrary to other works, we determine frequency dependent functions for the mechanical properties characterizing the viscoelastic material in the frequency range of human speech (100-250 Hz). The results for three different materials clearly show that the Poisson's ratio is close to 0.5 and that the Young's modulus increases with higher frequencies. For a frequency of 400 Hz, the Young's modulus of the investigated viscoelastic materials is approximately 80% higher than for the static case (0 Hz). We verify the identified mechanical properties with experiments on fabricated vocal fold models. Thereby, only small deviations between measurements and simulations occur.

  4. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear Poisson problem posed on the Ottawa Flat 270 design geometry.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina

    2012-05-01

    A numerical study aimed at evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem, implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry, is performed. This study led to some new development of Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics, the mean value of the solution (an L¹ norm) and the field integral of the solution (an L² norm).

  5. More on approximations of Poisson probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C

    1980-05-01

    Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including one that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty and Makabe-Morimura approximations are extremely poor compared with this approximation.
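
    Kao's transformation and the power transformation are not reproduced here, but the general shape of such normal approximations can be sketched with two classic ones: the continuity-corrected direct approximation and a square-root (variance-stabilizing) transformation:

```python
import math

# Generic sketch of two classic normal approximations to the Poisson CDF
# P(X <= k): a continuity-corrected direct approximation and a square-root
# (variance-stabilizing) transformation. The Kao and power-transformation
# methods compared in the abstract are not reproduced here.

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_cdf_exact(k, lam):
    term, total = math.exp(-lam), 0.0
    for i in range(k + 1):
        if i > 0:
            term *= lam / i
        total += term
    return total

def normal_cc(k, lam):
    """Direct normal approximation with continuity correction."""
    return phi((k + 0.5 - lam) / math.sqrt(lam))

def sqrt_transform(k, lam):
    """sqrt(X) is approximately N(sqrt(lam), 1/4)."""
    return phi(2.0 * (math.sqrt(k + 0.5) - math.sqrt(lam)))

err_cc = max(abs(normal_cc(k, 15.0) - poisson_cdf_exact(k, 15.0)) for k in range(40))
err_sqrt = max(abs(sqrt_transform(k, 15.0) - poisson_cdf_exact(k, 15.0)) for k in range(40))
```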

  6. WAITING TIME DISTRIBUTION OF SOLAR ENERGETIC PARTICLE EVENTS MODELED WITH A NON-STATIONARY POISSON PROCESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.; Su, W.; Fang, C.

    2014-09-10

    We present a study of the waiting time distributions (WTDs) of solar energetic particle (SEP) events observed with the spacecraft WIND and GOES. The WTDs of both solar electron events (SEEs) and solar proton events (SPEs) display a power-law tail of ∼Δt^(−γ). The SEEs display a broken power-law WTD. The power-law index is γ₁ = 0.99 for short waiting times (<70 hr) and γ₂ = 1.92 for large waiting times (>100 hr). The break of the WTD of SEEs is probably due to the modulation of the corotating interaction regions. A power-law index of γ ∼ 1.82 is derived for the WTD of the SPEs, which is consistent with the WTD of type II radio bursts, indicating a close relationship between the shock wave and the production of energetic protons. The WTDs of SEP events can be modeled with a non-stationary Poisson process, which was proposed to understand the waiting time statistics of solar flares. We generalize the method and find that, if the SEP event rate λ = 1/Δt varies as the time distribution of event rate f(λ) = Aλ^(−α)exp(−βλ), the time-dependent Poisson distribution can produce a power-law tail WTD of ∼Δt^(α−3), where 0 ≤ α < 2.
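
    The mechanism can be checked by simulation. With f(λ) ∝ λ^(−α)e^(−βλ), the rate governing an observed waiting time is length-biased by one factor of λ, so λ ~ Gamma(2 − α, β) and Δt | λ ~ Exp(λ), which gives the survival function S(Δt) = (β/(β + Δt))^(2−α), i.e. the density's power-law tail ∼Δt^(α−3). A sketch with α = 0.5, β = 1:

```python
import random

# Simulation sketch of the non-stationary Poisson waiting-time mechanism:
# with event-rate distribution f(lam) ~ lam**(-alpha) * exp(-beta*lam),
# the rate seen by an observed waiting time is length-biased by one
# factor of lam, i.e. lam ~ Gamma(2 - alpha, scale 1/beta) and
# dt | lam ~ Exp(lam), giving survival S(t) = (beta/(beta + t))**(2 - alpha).

alpha, beta = 0.5, 1.0
rng = random.Random(1)
waits = []
for _ in range(200000):
    lam = rng.gammavariate(2.0 - alpha, 1.0 / beta)  # (shape, scale)
    waits.append(rng.expovariate(lam))

emp_s10 = sum(1 for w in waits if w > 10.0) / len(waits)
pred_s10 = (beta / (beta + 10.0)) ** (2.0 - alpha)   # = 11**-1.5, about 0.027
```

    For α = 0.5 the survival tail falls off as Δt^(−1.5), far heavier than the exponential tail of a stationary Poisson process.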

  7. Crustal structure of the Transantarctic Mountains, Ellsworth Mountains and Marie Byrd Land, Antarctica: constraints on shear wave velocities, Poisson's ratios and Moho depths

    NASA Astrophysics Data System (ADS)

    Ramirez, C.; Nyblade, A.; Emry, E. L.; Julià, J.; Sun, X.; Anandakrishnan, S.; Wiens, D. A.; Aster, R. C.; Huerta, A. D.; Winberry, P.; Wilson, T.

    2017-12-01

    A uniform set of crustal parameters for seismic stations deployed on rock in West Antarctica and the Transantarctic Mountains (TAM) has been obtained to help elucidate similarities and differences in crustal structure within and between several tectonic blocks that make up these regions. P-wave receiver functions have been analysed using the H-κ stacking method to develop estimates of thickness and bulk Poisson's ratio for the crust, and jointly inverted with surface wave dispersion measurements to obtain depth-dependent shear wave velocity models for the crust and uppermost mantle. The results from 33 stations are reported, including three stations for which no previous results were available. The average crustal thickness is 30 ± 5 km along the TAM front, and 38 ± 2 km in the interior of the mountain range. The average Poisson's ratios for these two regions are 0.25 ± 0.03 and 0.26 ± 0.02, respectively, and they have similar average crustal Vs of 3.7 ± 0.1 km s⁻¹. At multiple stations within the TAM, we observe evidence for mafic layering within or at the base of the crust, which may have resulted from the Ferrar magmatic event. The Ellsworth Mountains have an average crustal thickness of 37 ± 2 km, a Poisson's ratio of 0.27, and average crustal Vs of 3.7 ± 0.1 km s⁻¹, similar to the TAM. This similarity is consistent with interpretations of the Ellsworth Mountains as a tectonically rotated TAM block. The Ross Island region has an average Moho depth of 25 ± 1 km, an average crustal Vs of 3.6 ± 0.1 km s⁻¹ and Poisson's ratio of 0.30, consistent with the mafic Cenozoic volcanism found there and its proximity to the Terror Rift. Marie Byrd Land has an average crustal thickness of 30 ± 2 km, Poisson's ratio of 0.25 ± 0.04 and crustal Vs of 3.7 ± 0.1 km s⁻¹. One station (SILY) in Marie Byrd Land is near an area of recent volcanism and deep (25-40 km) seismicity, and has a high Poisson's ratio, consistent with the presence of partial melt in the crust.
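
    The bulk Poisson's ratios quoted above follow from the crustal Vp/Vs ratio κ obtained by H-κ stacking, via σ = (κ² − 2)/(2(κ² − 1)); a one-line check:

```python
import math

# The bulk Poisson's ratio follows from the crustal Vp/Vs ratio (kappa):
#   sigma = (kappa**2 - 2) / (2 * (kappa**2 - 1))
# kappa = sqrt(3) gives the textbook "Poisson solid" value of 0.25;
# higher kappa (e.g. mafic crust) pushes sigma toward 0.30.

def poissons_ratio(vp_vs):
    k2 = vp_vs ** 2
    return (k2 - 2.0) / (2.0 * (k2 - 1.0))

sigma_poisson_solid = poissons_ratio(math.sqrt(3.0))  # 0.25
sigma_high_kappa = poissons_ratio(1.871)              # roughly 0.30
```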

  8. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined by a stochastic failure rate process with nonnegative and right-continuous trajectories. Equations for the conditional reliability functions of an object are derived under the assumption that the failure rate is a semi-Markov process with an at most countable state space, and a corresponding theorem is presented. Linear systems of equations for the appropriate Laplace transforms allow one to find the reliability functions for the alternating, Poisson and Furry-Yule failure rate processes.
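The reliability function driven by a random failure rate, R(t) = E[exp(−∫₀ᵗ λ(s) ds)], can be estimated by Monte Carlo for the alternating case, where λ switches between two levels with random sojourn times. A sketch with exponential sojourns and illustrative parameters (the paper works with Laplace transforms instead):

```python
import math
import random

def reliability_alternating(t, lam=(0.2, 1.0), mean_sojourn=(1.0, 1.0),
                            n_paths=20000, seed=1):
    """Monte Carlo estimate of R(t) = E[exp(-integral of lambda)] for an
    alternating failure-rate process: lambda switches between lam[0] and
    lam[1], staying in each state for an exponential sojourn time.
    All parameter values are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s, state, integral = 0.0, 0, 0.0
        while s < t:
            stay = rng.expovariate(1.0 / mean_sojourn[state])
            dt = min(stay, t - s)       # truncate the last sojourn at t
            integral += lam[state] * dt
            s += dt
            state ^= 1                   # alternate between the two states
        total += math.exp(-integral)
    return total / n_paths
```

As a sanity check, when both rates are equal to λ the integral is λt on every path and the estimate collapses to the classical exp(−λt).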

  9. A POISSON PROCESS APPROACH FOR RECURRENT EVENT DATA WITH ENVIRONMENTAL COVARIATES. (R825266)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  10. Modeling Creep Processes in Aging Polymers

    NASA Astrophysics Data System (ADS)

    Olali, N. V.; Voitovich, L. V.; Zazimko, N. N.; Malezhik, M. P.

    2016-03-01

    The photoelastic method is generalized to creep in hereditary aging materials. Optical-creep curves and mechanical-creep or optical-relaxation curves are used to interpret fringe patterns. For materials with constant Poisson's ratio, it is sufficient to use mechanical- or optical-creep curves for this purpose

  11. Functional Integration

    NASA Astrophysics Data System (ADS)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2006-11-01

    Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  12. Functional Integration

    NASA Astrophysics Data System (ADS)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2010-06-01

    Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  13. Improvements in continuum modeling for biomolecular systems

    NASA Astrophysics Data System (ADS)

    Yu, Qiao; Ben-Zhuo, Lu

    2016-01-01

    Modeling of biomolecular systems plays an essential role in understanding biological processes, such as ionic flow across channels, protein modification or interaction, and cell signaling. The continuum model described by the Poisson-Boltzmann (PB)/Poisson-Nernst-Planck (PNP) equations has made great contributions towards simulation of these processes. However, the model has shortcomings in its commonly used form and cannot capture (or cannot accurately capture) some important physical properties of the biological systems. Considerable efforts have been made to improve the continuum model to account for discrete particle interactions and to make progress in numerical methods to provide accurate and efficient simulations. This review will summarize recent main improvements in continuum modeling for biomolecular systems, with a focus on the size-modified models, the coupling of the classical density functional theory and the PNP equations, the coupling of polar and nonpolar interactions, and numerical progress. Project supported by the National Natural Science Foundation of China (Grant No. 91230106) and the Chinese Academy of Sciences Program for Cross & Cooperative Team of the Science & Technology Innovation.
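One widely used size-modified correction to the Boltzmann factor in PB theory caps ion concentrations at close packing instead of letting them diverge at high potential. A sketch assuming a Borukhov-type form for a 1:1 electrolyte (illustrative parameters; the review covers several model variants):

```python
import math

def boltzmann(c0, u):
    """Classical Boltzmann counter-ion concentration at dimensionless
    potential u = z*e*phi/(kB*T); grows without bound as u increases."""
    return c0 * math.exp(u)

def size_modified(c0, u, a3c0):
    """Borukhov-type size-modified concentration for a 1:1 electrolyte.
    a3c0 = a^3 * c0 encodes the finite ion size a; phi0 = 2*a3c0 is the
    bulk packing fraction. Saturates at 1/a^3 for large u (assumed form)."""
    phi0 = 2.0 * a3c0
    return c0 * math.exp(u) / (1.0 - phi0 + phi0 * math.cosh(u))

# At u = 0 both reduce to the bulk value c0; at large u the classical
# form explodes while the size-modified one levels off near 1/a^3.
```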

  14. The Nonhomogeneous Poisson Process for Fast Radio Burst Rates

    DOE PAGES

    Lawrence, Earl; Wiel, Scott Vander; Law, Casey; ...

    2017-08-30

    This paper presents the non-homogeneous Poisson process (NHPP) for modeling the rate of fast radio bursts (FRBs) and other infrequently observed astronomical events. The NHPP, well-known in statistics, can model dependence of the rate on both astronomical features and the details of an observing campaign. This is particularly helpful for rare events like FRBs because the NHPP can combine information across surveys, making the most of all available information. The goal of the paper is two-fold. First, it is intended to be a tutorial on the use of the NHPP. Second, we build an NHPP model that incorporates beam patterns and a power law flux distribution for the rate of FRBs. Using information from 12 surveys including 15 detections, we find an all-sky FRB rate of 587 events per sky per day above a flux of 1 Jy (95% CI: 272, 924) and a flux power-law index of 0.91 (95% CI: 0.57, 1.25).
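A standard way to simulate the NHPP events that such a model describes is Lewis-Shedler thinning: generate candidate arrivals from a homogeneous process whose rate bounds the target rate, then accept each candidate with probability rate(t)/rate_max. A sketch with an assumed, illustrative rate function (not the paper's beam-pattern/flux model):

```python
import random

def simulate_nhpp(rate, t_max, rate_max, seed=0):
    """Event times of a nonhomogeneous Poisson process on [0, t_max] via
    Lewis-Shedler thinning. rate_max must be an upper bound on rate(t)."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)          # next candidate arrival
        if t > t_max:
            return events
        if rng.random() < rate(t) / rate_max:   # keep with prob rate(t)/rate_max
            events.append(t)

# Illustrative: a detection rate that decays over the course of a campaign.
times = simulate_nhpp(lambda t: 5.0 / (1.0 + t), t_max=10.0, rate_max=5.0)
```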

  15. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.

  16. Distribution of apparent activation energy counterparts during thermo- and thermo-oxidative degradation of Aronia melanocarpa (black chokeberry).

    PubMed

    Janković, Bojan; Marinović-Cincović, Milena; Janković, Marija

    2017-09-01

    Kinetics of degradation for Aronia melanocarpa fresh fruits in argon and air atmospheres were investigated. The investigation was based on probability distributions of apparent activation energy counterparts (εa). Isoconversional analysis results indicated that the degradation process in an inert atmosphere was governed by decomposition reactions of esterified compounds. Also, based on the same kinetics approach, it was assumed that in an air atmosphere, the primary compound in degradation pathways could be anthocyanins, which undergo rapid chemical reactions. A new model of reactivity demonstrated that, under inert atmospheres, expectation values for εa occurred at levels of statistical probability. These values corresponded to decomposition processes in which polyphenolic compounds might be involved. εa values obeyed laws of binomial distribution. It was established that, for thermo-oxidative degradation, the Poisson distribution represented a very successful approximation for εa values where there was additional mechanistic complexity and the binomial distribution was no longer valid. Copyright © 2017 Elsevier Ltd. All rights reserved.
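The Poisson distribution's role as an approximation to the binomial is the classical law of rare events: for many trials with small success probability, Binomial(n, p) is close to Poisson(np). A quick numerical check (n and p are illustrative, not values from the study):

```python
import math

def binom_pmf(k, n, p):
    """Binomial(n, p) probability mass at k."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """Poisson(lam) probability mass at k."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# With n large and p small, Poisson(n*p) tracks the binomial closely.
n, p = 500, 0.01
max_gap = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, n * p)) for k in range(20))
```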

  17. Radiation-induced effects on the mechanical properties of natural ZrSiO4: double cascade-overlap damage accumulation

    NASA Astrophysics Data System (ADS)

    Beirau, Tobias; Nix, William D.; Pöllmann, Herbert; Ewing, Rodney C.

    2018-05-01

    Several different models are known to describe the structure-dependent radiation-induced damage accumulation process in materials (e.g. Gibbons Proc IEEE 60:1062-1096, 1972; Weber Nuc Instr Met Phys Res B 166-167:98-106, 2000). In the literature, two different models of damage accumulation due to α-decay events in natural ZrSiO4 (zircon) have been described. The direct impact damage accumulation model is based on amorphization occurring directly within the collision cascade. However, the double cascade-overlap damage accumulation model predicts that amorphization will only occur due to the overlap of disordered domains within the cascade. By analyzing the dose-dependent evolution of mechanical properties (i.e., Poisson's ratios, compliance constants, elastic modulus, and hardness) as a measure of the increasing amorphization, we provide support for the double cascade-overlap damage accumulation model. We found no evidence to support the direct impact damage accumulation model. Additionally, the amount of radiation damage could be related to an anisotropic-to-isotropic transition of the Poisson's ratio for stress along and perpendicular to the four-fold c-axis and of the related compliance constants of natural U- and Th-bearing zircon. The isotropification occurs in the dose range between 3.1 × 10¹⁸ and 6.3 × 10¹⁸ α-decays/g.
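Both damage-accumulation models can be phrased in Poisson terms: cascades strike a given region of the lattice as a Poisson process in dose, and the region amorphizes once it has been hit more than m times (m = 0 recovers direct impact; m ≥ 1 gives cascade overlap). A sketch of this Gibbons-type counting model (σ and dose values are illustrative):

```python
import math

def amorphous_fraction(dose, sigma, m):
    """Gibbons-type overlap model: hits on a region follow a Poisson
    distribution with mean sigma*dose; the region is amorphous once it
    has received more than m hits. m = 0 gives the direct-impact curve
    1 - exp(-sigma*dose); larger m delays amorphization (sigmoidal curve)."""
    lam = sigma * dose
    crystalline = sum(lam**k / math.factorial(k) for k in range(m + 1)) * math.exp(-lam)
    return 1.0 - crystalline
```

At any given dose, requiring overlap (m ≥ 1) always yields a smaller amorphous fraction than direct impact, which is why the two models can be distinguished from dose-dependent property curves.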

  18. Anisotropic mechanical properties of zircon and the effect of radiation damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beirau, Tobias; Nix, William D.; Bismayer, Ulrich

    2016-06-02

    Our study provides new insights into the relationship between radiation-dose-dependent structural damage, due to natural U and Th impurities, and the anisotropic mechanical properties (Poisson's ratio, elastic modulus and hardness) of zircon. Natural zircon samples from Sri Lanka (see Murakami et al. 1991) and synthetic samples, covering a dose range of zero up to 6.8 × 10¹⁸ α-decays/g, have been studied by nanoindentation. Measurements along the [100] crystallographic direction and calculations, based on elastic stiffness constants determined by Özkan (1976), revealed a general radiation-induced decrease in stiffness (~ 54 %) and hardness (~ 48 %) and an increase of the Poisson's ratio (~ 54 %) with increasing dose. Additional indentations on selected samples along the [001] allowed one to follow the amorphization process to the point that the mechanical properties are isotropic. This work shows that the radiation-dose-dependent changes of the mechanical properties of zircon can be directly correlated with the amorphous fraction as determined by previous investigations with local and global probes (Rios et al. 2000a; Farnan and Salje 2001; Zhang and Salje 2001). This agreement, revealed by the different methods, indicates a huge influence of structural and even local phenomena on the macroscopic mechanical properties.

  19. Radiation-induced effects on the mechanical properties of natural ZrSiO4: double cascade-overlap damage accumulation

    NASA Astrophysics Data System (ADS)

    Beirau, Tobias; Nix, William D.; Pöllmann, Herbert; Ewing, Rodney C.

    2017-11-01

    Several different models are known to describe the structure-dependent radiation-induced damage accumulation process in materials (e.g. Gibbons Proc IEEE 60:1062-1096, 1972; Weber Nuc Instr Met Phys Res B 166-167:98-106, 2000). In the literature, two different models of damage accumulation due to α-decay events in natural ZrSiO4 (zircon) have been described. The direct impact damage accumulation model is based on amorphization occurring directly within the collision cascade. However, the double cascade-overlap damage accumulation model predicts that amorphization will only occur due to the overlap of disordered domains within the cascade. By analyzing the dose-dependent evolution of mechanical properties (i.e., Poisson's ratios, compliance constants, elastic modulus, and hardness) as a measure of the increasing amorphization, we provide support for the double cascade-overlap damage accumulation model. We found no evidence to support the direct impact damage accumulation model. Additionally, the amount of radiation damage could be related to an anisotropic-to-isotropic transition of the Poisson's ratio for stress along and perpendicular to the four-fold c-axis and of the related compliance constants of natural U- and Th-bearing zircon. The isotropification occurs in the dose range between 3.1 × 10¹⁸ and 6.3 × 10¹⁸ α-decays/g.

  20. A flexible count data model to fit the wide diversity of expression profiles arising from extensively replicated RNA-seq experiments

    PubMed Central

    2013-01-01

    Background High-throughput RNA sequencing (RNA-seq) offers unprecedented power to capture the real dynamics of gene expression. Experimental designs with extensive biological replication present a unique opportunity to exploit this feature and distinguish expression profiles with higher resolution. RNA-seq data analysis methods so far have been mostly applied to data sets with few replicates and their default settings try to provide the best performance under this constraint. These methods are based on two well-known count data distributions: the Poisson and the negative binomial. The way to properly calibrate them with large RNA-seq data sets is not trivial for the non-expert bioinformatics user. Results Here we show that expression profiles produced by extensively-replicated RNA-seq experiments lead to a rich diversity of count data distributions beyond the Poisson and the negative binomial, such as Poisson-Inverse Gaussian or Pólya-Aeppli, which can be captured by a more general family of count data distributions called the Poisson-Tweedie. The flexibility of the Poisson-Tweedie family enables a direct fitting of emerging features of large expression profiles, such as heavy-tails or zero-inflation, without the need to alter a single configuration parameter. We provide a software package for R called tweeDEseq implementing a new test for differential expression based on the Poisson-Tweedie family. Using simulations on synthetic and real RNA-seq data we show that tweeDEseq yields P-values that are equally or more accurate than competing methods under different configuration parameters. By surveying the tiny fraction of sex-specific gene expression changes in human lymphoblastoid cell lines, we also show that tweeDEseq accurately detects differentially expressed genes in a real large RNA-seq data set with improved performance and reproducibility over the previously compared methodologies. 
Finally, we compared the results with those obtained from microarrays in order to check for reproducibility. Conclusions RNA-seq data with many replicates leads to a handful of count data distributions which can be accurately estimated with the statistical model illustrated in this paper. This method provides a better fit to the underlying biological variability; this may be critical when comparing groups of RNA-seq samples with markedly different count data distributions. The tweeDEseq package forms part of the Bioconductor project and it is available for download at http://www.bioconductor.org. PMID:23965047
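The overdispersion that motivates moving beyond the plain Poisson model can be checked with a simple variance-to-mean ratio: mixing the Poisson rate across samples (as in gamma-Poisson/negative binomial or Poisson-Tweedie models) inflates it above 1. A minimal sketch on synthetic counts (illustrative parameters, not RNA-seq data):

```python
import random

def poisson_sample(rng, lam):
    """One Poisson(lam) draw: count unit-rate exponential arrivals in [0, lam]."""
    t, k = 0.0, 0
    while True:
        t += rng.expovariate(1.0)
        if t > lam:
            return k
        k += 1

def dispersion_index(counts):
    """Sample variance-to-mean ratio: ~1 for Poisson, >1 for overdispersed data."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

rng = random.Random(0)
pure = [poisson_sample(rng, 10.0) for _ in range(2000)]
# Randomizing the rate between samples mimics biological variability and
# produces the heavy-tailed, overdispersed profiles described above.
mixed = [poisson_sample(rng, rng.choice([2.0, 18.0])) for _ in range(2000)]
```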
