Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_\alpha(N_\beta(t)) \stackrel{d}{=} \sum_{j=1}^{N_\beta(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\alpha(\tau_k^\nu)$, $\nu \in (0,1]$, where $\tau_k^\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
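The first identity is easy to check numerically. Below is a minimal simulation sketch (our illustration, not the authors' code) comparing the composed process $N_\alpha(N_\beta(t))$ with the random sum of i.i.d. Poisson($\alpha$) variables; the rates and horizon are arbitrary placeholder values.

```python
# Check N_alpha(N_beta(t)) =_d sum_{j=1}^{N_beta(t)} X_j, with X_j ~ Poisson(alpha).
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, t, n = 2.0, 1.5, 3.0, 50_000

inner = rng.poisson(beta * t, size=n)                 # N_beta(t)
composed = rng.poisson(alpha * inner)                 # N_alpha(k) ~ Poisson(alpha*k) given k
random_sum = np.array([rng.poisson(alpha, size=k).sum() for k in inner])

# Both should have mean alpha*beta*t = 9 and variance alpha*beta*t*(1 + alpha) = 27.
print(composed.mean(), random_sum.mean())
print(composed.var(), random_sum.var())
```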
Poisson process stimulation of an excitable membrane cable model.
Goldfinger, M D
1986-01-01
The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods rendered the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
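As a concrete illustration of the stimulus construction described above, the sketch below generates a Poisson pulse train with constant amplitude and constant or exponentially variable durations; all parameter values are invented for illustration and are not those of the paper.

```python
# Poisson-process stimulus: exponential inter-pulse intervals, optional duration jitter.
import numpy as np

rng = np.random.default_rng(1)

def poisson_pulse_train(rate_hz, t_max_s, amp, dur_ms, jitter=False):
    """Return (onset_s, duration_ms, amplitude) triples for Poisson stimulation."""
    n_draw = int(3 * rate_hz * t_max_s) + 20          # generous over-draw of intervals
    onsets = np.cumsum(rng.exponential(1.0 / rate_hz, size=n_draw))
    onsets = onsets[onsets < t_max_s]
    durs = rng.exponential(dur_ms, onsets.size) if jitter else np.full(onsets.size, dur_ms)
    return list(zip(onsets, durs, np.full(onsets.size, amp)))

for onset, dur, amp in poisson_pulse_train(100.0, 0.05, amp=10.0, dur_ms=0.2)[:5]:
    print(f"onset={onset * 1e3:6.2f} ms  dur={dur:4.2f} ms  amp={amp}")
```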
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution, and demonstrates that the latter gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
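The gamma-mixed Poisson construction referred to above is easy to verify by simulation: a gamma-distributed random rate fed into a Poisson count reproduces the negative binomial. A short sketch with arbitrary shape/scale values:

```python
# Gamma-mixed Poisson vs negative binomial (numpy's NB(n, p) parameterization).
import numpy as np

rng = np.random.default_rng(2)
shape, scale, n = 3.0, 2.0, 200_000

lam = rng.gamma(shape, scale, size=n)            # random rate: variation between patients
mixed = rng.poisson(lam)                         # Poisson count given the rate
nb = rng.negative_binomial(shape, 1.0 / (1.0 + scale), size=n)

print(mixed.mean(), nb.mean())                   # both approx shape*scale = 6
print(mixed.var(), nb.var())                     # both approx shape*scale*(1 + scale) = 18
```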
Scaling the Poisson Distribution
ERIC Educational Resources Information Center
Farnsworth, David L.
2014-01-01
We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
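The property is simple to confirm by simulation; the following sketch (our own, with arbitrary rates) compares the empirical distribution of the sum of two independent Poisson variables with the Poisson pmf at the summed rate.

```python
# If X ~ Poisson(a) and Y ~ Poisson(b) independently, then X + Y ~ Poisson(a + b).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
a, b, n = 1.3, 2.2, 500_000
s = rng.poisson(a, n) + rng.poisson(b, n)

for k in range(6):                # empirical frequency vs Poisson(a + b) pmf
    print(k, round((s == k).mean(), 4), round(poisson.pmf(k, a + b), 4))
```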
Binomial leap methods for simulating stochastic chemical kinetics.
Tian, Tianhai; Burrage, Kevin
2004-12-01
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the number of molecules available to that channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels, so that samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and a significant improvement in efficiency over existing approaches. (c) 2004 American Institute of Physics.
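To make the bounding idea concrete, here is an illustrative single-channel sketch (not the authors' implementation) for a first-order decay reaction A → ∅ with rate constant c: the binomial leap draws the number of firings from B(A, cτ), so the population can never go negative, unlike with a Poisson(cAτ) draw.

```python
# One binomial-leap step for the decay channel A -> 0.
import numpy as np

rng = np.random.default_rng(4)

def binomial_leap_step(A, c, tau):
    """Advance the molecule count A by one leap of length tau."""
    p = min(1.0, c * tau)        # per-molecule firing probability over the leap
    k = rng.binomial(A, p)       # number of firings, bounded above by A
    return A - k

A, c, tau = 20, 0.9, 1.0
for _ in range(5):
    A = binomial_leap_step(A, c, tau)
    print(A)
```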
Poisson-Like Spiking in Circuits with Probabilistic Synapses
Moreno-Bote, Rubén
2014-01-01
Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates, from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
Poisson and negative binomial item count techniques for surveys with sensitive question.
Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin
2017-04-01
Although the item count technique is useful in surveys with sensitive questions, the privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
Bayesian dynamic modeling of time series of dengue disease case counts.
Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander
2017-07-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology employs dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results requires a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso
2016-01-01
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. IMPORTANCE: We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. PMID:27940547
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2017-02-15
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.
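A minimal sketch of this style of model, with invented Weibull parameters (delta and p below are placeholders, not fitted values): initial per-well counts are Poisson(λ = 2), each cell independently survives to time t with probability S(t) = exp(−(t/δ)^p), and by Poisson thinning the survivor counts remain Poisson with parameter λ·S(t).

```python
# Poisson initial counts thinned by a Weibull-type survival probability.
import numpy as np

rng = np.random.default_rng(5)
lam0, delta, p = 2.0, 3.0, 1.5               # lam0 as in the study; delta, p invented

def surviving_counts(t, n_wells=96):
    initial = rng.poisson(lam0, n_wells)     # cells per well at t = 0
    s = np.exp(-((t / delta) ** p))          # per-cell survival probability S(t)
    return rng.binomial(initial, s)          # independent thinning of each well

for t in (0.0, 2.0, 5.0):
    s = np.exp(-((t / delta) ** p))
    print(t, surviving_counts(t).mean().round(2), round(lam0 * s, 2))  # empirical vs lam0*S(t)
```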
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conformed to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown from a theoretical perspective to give variable results that inevitably arise from the random distribution of parasite eggs in a well-mixed faecal sample. The Poisson processes that lead to this variability are described, with illustrative examples of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and betray ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
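The confidence intervals discussed above follow directly from Poisson statistics. The sketch below computes the exact (chi-square-based) 95% interval for a slide count and scales it by an assumed eggs-per-gram multiplication factor of 50; the factor is illustrative, not prescribed by the paper.

```python
# Exact (Garwood) Poisson confidence interval for a McMaster-type count.
from scipy.stats import chi2

def poisson_ci(k, conf=0.95):
    a = 1.0 - conf
    lo = 0.0 if k == 0 else chi2.ppf(a / 2, 2 * k) / 2
    hi = chi2.ppf(1 - a / 2, 2 * (k + 1)) / 2
    return lo, hi

factor = 50                                   # assumed eggs-per-gram multiplier
for k in (1, 4, 10):                          # eggs counted on the slide
    lo, hi = poisson_ci(k)
    print(k * factor, "epg, 95% CI:", (round(lo * factor), round(hi * factor)))
```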
Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks
NASA Astrophysics Data System (ADS)
Frahm, Klaus M.; Shepelyansky, Dima L.
2014-04-01
We use the methods of quantum chaos and random matrix theory for the analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by the logarithm of the PageRank probability at a given node. After the standard energy-level unfolding procedure we establish that the nearest-spacing distribution of PageRank probabilities is described by the Poisson law typical of integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to the absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that nearby PageRank probabilities fluctuate as random independent variables.
Bayesian dynamic modeling of time series of dengue disease case counts
López-Quílez, Antonio; Torres-Prieto, Alexander
2017-01-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology employs dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results requires a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y
2012-01-01
A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise which contaminates time-resolved photon counting data is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson-distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction with time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image is smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening etc., is improved by the proposed noise reduction.
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19–30, 1998.
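A stripped-down sketch of the first (Poisson rectangular pulse) step, with placeholder exponential parameters rather than values fitted to the stations mentioned above:

```python
# Alternating storm events and inter-storm periods as rectangular pulses.
import numpy as np

rng = np.random.default_rng(6)
mean_event_h, mean_dry_h, mean_intensity = 6.0, 40.0, 1.2   # assumed parameters

def generate_events(n_events):
    """Return (start_h, duration_h, intensity_mm_per_h) triples."""
    events, t = [], 0.0
    for _ in range(n_events):
        t += rng.exponential(mean_dry_h)        # inter-storm period
        d = rng.exponential(mean_event_h)       # event duration
        i = rng.exponential(mean_intensity)     # mean event intensity
        events.append((t, d, i))
        t += d
    return events

for start, dur, inten in generate_events(3):
    print(f"start={start:7.1f} h  duration={dur:5.1f} h  intensity={inten:4.2f} mm/h")
```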
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the parameters of the Poisson distribution obtained experimentally from bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers could be applied to some single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
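The goodness-of-fit step described above can be sketched as a likelihood-ratio (G) test of observed counts against a Poisson pmf with the MLE parameter. The counts below are simulated stand-ins for plate data, and the binning is deliberately simplistic (small expected counts are not pooled).

```python
# Likelihood-ratio (G) test of counts against Poisson with MLE parameter.
import numpy as np
from scipy.stats import poisson, chi2

rng = np.random.default_rng(7)
counts = rng.poisson(2.0, size=96)                 # stand-in for per-well cell counts

lam_hat = counts.mean()                            # MLE of the Poisson parameter
values, freq = np.unique(counts, return_counts=True)
expected = poisson.pmf(values, lam_hat) * counts.size

g = 2.0 * np.sum(freq * np.log(freq / expected))   # G statistic
df = len(values) - 1 - 1                           # categories - 1 - one fitted parameter
print(g, chi2.sf(g, df))                           # large p-value: no evidence against Poisson
```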
ERIC Educational Resources Information Center
Connell, Arin M.; Dishion, Thomas J.; Deater-Deckard, Kirby
2006-01-01
This 4-year study of 698 young adolescents examined the covariates of early onset substance use from Grade 6 through Grade 9. The youth were randomly assigned to a family-centered Adolescent Transitions Program (ATP) condition. Variable-centered (zero-inflated Poisson growth model) and person-centered (latent growth mixture model) approaches were…
Wen, L; Bowen, C R; Hartman, G L
2017-10-01
Dispersal of urediniospores by wind is the primary means of spread for Phakopsora pachyrhizi, the cause of soybean rust. Our research focused on the short-distance movement of urediniospores from within the soybean canopy and up to 61 m from field-grown rust-infected soybean plants. Environmental variables were used to develop and compare models including least absolute shrinkage and selection operator regression, zero-inflated Poisson/regular Poisson regression, random forest, and neural network models to describe the deposition of urediniospores collected in passive and active traps. All four models identified distance of trap from source, humidity, temperature, wind direction, and wind speed as the five most important variables influencing short-distance movement of urediniospores. The random forest model provided the best predictions, explaining 76.1 and 86.8% of the total variation in the passive- and active-trap datasets, respectively. The prediction accuracy, based on the correlation coefficient (r) between predicted and true values, was 0.83 (P < 0.0001) and 0.94 (P < 0.0001) for the passive- and active-trap datasets, respectively. Overall, multiple machine learning techniques identified the most important variables and made the most accurate predictions of short-distance movement of P. pachyrhizi urediniospores.
Low-contrast lesion detection in tomosynthetic breast imaging using a realistic breast phantom
NASA Astrophysics Data System (ADS)
Zhou, Lili; Oldan, Jorge; Fisher, Paul; Gindi, Gene
2006-03-01
Tomosynthesis mammography is a potentially valuable technique for detection of breast cancer. In this simulation study, we investigate the efficacy of three different tomographic reconstruction methods, EM, SART and Backprojection, in the context of an especially difficult mammographic detection task. The task is the detection of a very low-contrast mass embedded in very dense fibro-glandular tissue - a clinically useful task for which tomosynthesis may be well suited. The project uses an anatomically realistic 3D digital breast phantom whose normal anatomic variability limits lesion conspicuity. In order to capture anatomical object variability, we generate an ensemble of phantoms, each of which comprises random instances of various breast structures. We construct medium-sized 3D breast phantoms which model random instances of ductal structures, fibrous connective tissue, Cooper's ligaments and power law structural noise for small scale object variability. Random instances of 7-8 mm irregular masses are generated by a 3D random walk algorithm and placed in very dense fibro-glandular tissue. Several other components of the breast phantom are held fixed, i.e. not randomly generated. These include the fixed breast shape and size, nipple structure, fixed lesion location, and a pectoralis muscle. We collect low-dose data using an isocentric tomosynthetic geometry at 11 angles over 50 degrees and add Poisson noise. The data is reconstructed using the three algorithms. Reconstructed slices through the center of the lesion are presented to human observers in a 2AFC (two-alternative-forced-choice) test that measures detectability by computing AUC (area under the ROC curve). The data collected in each simulation includes two sources of variability, that due to the anatomical variability of the phantom and that due to the Poisson data noise. We found that for this difficult task that the AUC value for EM (0.89) was greater than that for SART (0.83) and Backprojection (0.66).
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
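The algebraic equivalence invoked above is easy to demonstrate numerically: Poisson regression of the event indicator with a log(time) offset recovers the coefficients of an exponential survival model. The sketch below uses simulated data and the statsmodels GLM interface (our illustration, not the paper's code).

```python
# Poisson regression with log(time) offset == exponential survival regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 5_000
x = rng.binomial(1, 0.5, n).astype(float)     # e.g. diseased vs non-diseased indicator
rate = np.exp(-1.0 + 0.7 * x)                 # true log hazard: -1 + 0.7*x
time = rng.exponential(1.0 / rate)            # exponential sojourn times
event = np.ones(n)                            # every transition observed

X = sm.add_constant(x)
fit = sm.GLM(event, X, family=sm.families.Poisson(), offset=np.log(time)).fit()
print(fit.params)                             # approximately [-1.0, 0.7]
```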
Graphic Simulations of the Poisson Process.
1982-10-01
RANDOM NUMBERS AND TRANSFORMATIONS; THE RANDOM NUMBER GENERATOR; POISSON PROCESSES USER GUIDE. ... again. In the superimposed mode, two Poisson processes are active, each with a different rate parameter (call them Type I and Type II with respective ... occur). The value p is generated by the following equation, where L1 and L2 are the rates of the two Poisson processes: p = L1 / (L1 + L2). The value
Research in Stochastic Processes.
1983-10-01
increases. A more detailed investigation for the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and ... J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes ... stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution has gained importance in survival analysis owing to its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate in some circumstances to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set of Turkey is examined.
Modified Regression Correlation Coefficient for Poisson Regression Model
NASA Astrophysics Data System (ADS)
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study gives attention to indicators of the predictive power of the generalized linear model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and root mean square error (RMSE).
NASA Astrophysics Data System (ADS)
Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.
2018-04-01
Sea level rise has already caused more frequent and severe coastal flooding, and this trend will likely continue. Flood prediction is an essential part of a coastal city's capacity to adapt to and mitigate this growing problem. Complex coastal urban hydrological systems, however, do not always lend themselves easily to physically-based flood prediction approaches. This paper presents a method for using a data-driven approach to estimate flood severity in an urban coastal setting using crowd-sourced data, a non-traditional but growing data source, along with environmental observation data. Two data-driven models, Poisson regression and random forest regression, are trained to predict the number of flood reports per storm event as a proxy for flood severity, given extensive environmental data (i.e., rainfall, tide, groundwater table level, and wind conditions) as input. The method is demonstrated using data from Norfolk, Virginia, USA from September 2010 to October 2016. Quality-controlled, crowd-sourced street flooding reports ranging from 1 to 159 per storm event for 45 storm events are used to train and evaluate the models. Random forest performed better than Poisson regression at predicting the number of flood reports and had a lower false negative rate. In the random forest model, total cumulative rainfall was by far the most dominant input variable in predicting flood severity, followed by low tide and lower low tide. These methods serve as a first step toward using data-driven methods for spatially and temporally detailed coastal urban flood prediction.
Lin, Yi Hung; Tu, Yu Kang; Lu, Chun Tai; Chung, Wen Chen; Huang, Chiung Fang; Huang, Mao Suan; Lu, Hsein Kun
2014-01-01
Repigmentation occurs variably with different treatment methods in patients with gingival pigmentation. A systematic review was conducted of various treatment modalities for eliminating melanin pigmentation of the gingiva, comprising bur abrasion, scalpel surgery, cryosurgery, electrosurgery, gingival grafts, and laser techniques, to compare the recurrence rates (Rrs) of these treatment procedures. Electronic databases, including PubMed, Web of Science, Google, and Medline, were comprehensively searched, and manual searches were conducted for studies published from January 1951 to June 2013. After applying inclusion and exclusion criteria, the final list of articles was reviewed in depth to achieve the objectives of this review. A Poisson regression was used to analyze the outcome of depigmentation using the various treatment methods. The systematic review was based mainly on case reports. In total, 61 eligible publications met the defined criteria. The various therapeutic procedures showed variable clinical results with a wide range of Rrs. A random-effects Poisson regression showed that cryosurgery (Rr = 0.32%), electrosurgery (Rr = 0.74%), and laser depigmentation (Rr = 1.16%) yielded superior results, whereas bur abrasion yielded the highest Rr (8.89%). Within the limits of the sampling level, the present evidence-based results show that cryosurgery exhibits the best predictability for depigmentation of the gingiva among all procedures examined, followed by electrosurgery and laser techniques. It is possible to treat melanin pigmentation of the gingiva with various methods and prevent repigmentation. Among these treatment modalities, cryosurgery, electrosurgery, and laser surgery appear to be the best choices for treating gingival pigmentation. © 2014 Wiley Periodicals, Inc.
A poisson process model for hip fracture risk.
Schechner, Zvi; Luo, Gangming; Kaufman, Jonathan J; Siffert, Robert S
2010-08-01
The primary method for assessing fracture risk in osteoporosis relies on measurement of bone mass. Estimation of fracture risk is most often evaluated using logistic or proportional hazards models. Notwithstanding the success of these models, there is still much uncertainty as to who will or will not suffer a fracture. This has led to a search for other components besides mass that affect bone strength. The purpose of this paper is to introduce a new mechanistic stochastic model that characterizes the risk of hip fracture in an individual. A Poisson process is used to model the occurrence of falls, which are assumed to occur at a rate λ. The load induced by a fall is assumed to be a random variable that has a Weibull probability distribution. The combination of falls together with loads leads to a compound Poisson process. By retaining only those occurrences of the compound Poisson process that result in a hip fracture, a thinned Poisson process is obtained, which is itself a Poisson process. The fall rate is modeled as an affine function of age, and hip strength is modeled as a power-law function of bone mineral density (BMD). The risk of hip fracture can then be computed as a function of age and BMD. By extending the analysis to a Bayesian framework, the conditional densities of BMD given a prior fracture and no prior fracture can be computed and shown to be consistent with clinical observations. In addition, the conditional probabilities of fracture given a prior fracture and no prior fracture can also be computed, and likewise show results similar to clinical data. The model elucidates the fact that the hip fracture process is inherently random, and that improvements in hip strength estimation over and above that provided by BMD operate in a highly "noisy" environment and may therefore have little ability to impact clinical practice.
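A small simulation sketch of the mechanism described, with invented parameter values: falls arrive as a Poisson process, fall loads are Weibull, and retaining only the loads exceeding a strength threshold thins the fall process into the (still Poisson) fracture process.

```python
# Compound Poisson falls thinned by a strength threshold.
import numpy as np

rng = np.random.default_rng(9)
fall_rate = 1.5                       # falls per year (assumed)
w_shape, w_scale = 2.0, 1800.0        # Weibull fall load (assumed, arbitrary units)
strength = 3000.0                     # hip strength threshold (assumed)

def fractures_in(years, n_sim=20_000):
    falls = rng.poisson(fall_rate * years, n_sim)
    out = np.empty(n_sim, dtype=int)
    for i, k in enumerate(falls):
        loads = w_scale * rng.weibull(w_shape, k)
        out[i] = int((loads > strength).sum())
    return out

f = fractures_in(5.0)
p_exceed = np.exp(-((strength / w_scale) ** w_shape))   # P(load > strength)
print(f.mean(), fall_rate * 5.0 * p_exceed)             # thinned Poisson mean check
```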
Cappell, M S; Spray, D C; Bennett, M V
1988-06-28
Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.
The impact of short term synaptic depression and stochastic vesicle dynamics on neuronal variability
Reich, Steven
2014-01-01
Neuronal variability plays a central role in neural coding and impacts the dynamics of neuronal networks. Unreliability of synaptic transmission is a major source of neural variability: synaptic neurotransmitter vesicles are released probabilistically in response to presynaptic action potentials and are recovered stochastically in time. The dynamics of this process of vesicle release and recovery interacts with variability in the arrival times of presynaptic spikes to shape the variability of the postsynaptic response. We use continuous time Markov chain methods to analyze a model of short term synaptic depression with stochastic vesicle dynamics coupled with three different models of presynaptic spiking: one in which the timing of presynaptic action potentials is modeled as a Poisson process, one in which action potentials occur more regularly than a Poisson process (sub-Poisson), and one in which action potentials occur more irregularly (super-Poisson). We use this analysis to investigate how variability in a presynaptic spike train is transformed by short term depression and stochastic vesicle dynamics to determine the variability of the postsynaptic response. We find that sub-Poisson presynaptic spiking increases the average rate at which vesicles are released, that the number of vesicles released over a time window is more variable for smaller time windows than larger time windows, and that fast presynaptic spiking gives rise to Poisson-like variability of the postsynaptic response even when presynaptic spike times are non-Poisson. Our results complement and extend previously reported theoretical results and provide possible explanations for some trends observed in recorded data. PMID:23354693
Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method
NASA Astrophysics Data System (ADS)
Prahutama, Alan; Sudarno
2018-05-01
The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country's economic development: a high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a discrete dependent variable Y and an independent variable X is the Poisson regression model. Regression models recently used for discrete dependent variables include Poisson regression, negative binomial regression and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable that gives the most influence on the infant mortality rate is the average breastfeeding (X9).
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
NASA Astrophysics Data System (ADS)
Tatlier, Mehmet Seha
Random fibrous networks can be found among natural and synthetic materials. Some of these random fibrous networks possess a negative Poisson's ratio and are commonly called auxetic materials. The governing mechanisms behind this counterintuitive property in random networks are yet to be understood, and this kind of auxetic material remains widely under-explored. However, most synthetic auxetic materials suffer from low strength. This shortcoming can be rectified by developing high-strength auxetic composites. The process of embedding auxetic random fibrous networks in a polymer matrix is an attractive alternative route to the manufacture of auxetic composites; however, before such an approach can be developed, a methodology for designing fibrous networks with the desired negative Poisson's ratios must first be established. This requires an understanding of the factors which bring about negative Poisson's ratios in these materials. In this study, a numerical model is presented in order to investigate the auxetic behavior of compressed random fiber networks. Finite element analyses of three-dimensional stochastic fiber networks were performed to gain insight into the effects of parameters such as network anisotropy, network density, and degree of network compression on the out-of-plane Poisson's ratio and Young's modulus. The simulation results suggest that compression is the critical parameter that gives rise to a negative Poisson's ratio, while anisotropy significantly promotes the auxetic behavior. This model can be utilized to design fibrous auxetic materials and to evaluate the feasibility of developing auxetic composites by using auxetic fibrous networks as the reinforcing layer.
Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
STDP allows fast rate-modulated coding with Poisson-like spike trains.
Gilson, Matthieu; Masquelier, Timothée; Hugues, Etienne
2011-10-01
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (~10-20 ms) for sufficiently many inputs (~100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
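The input model described here, an inhomogeneous Poisson process driven by a rate function, can be generated by Lewis-Shedler thinning; the rate function below is a placeholder with roughly the 10-20 ms modulation timescale discussed, not one taken from the paper.

```python
# Inhomogeneous Poisson spike train via thinning of a homogeneous process.
import numpy as np

rng = np.random.default_rng(10)

def inhomogeneous_poisson(rate_fn, t_max, rate_max):
    """Spike times in [0, t_max) s; rate_fn must never exceed rate_max."""
    n = rng.poisson(rate_max * t_max)
    candidates = np.sort(rng.uniform(0.0, t_max, n))
    keep = rng.uniform(0.0, rate_max, n) < rate_fn(candidates)
    return candidates[keep]

rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t / 0.02)   # Hz, 20 ms modulation
spikes = inhomogeneous_poisson(rate, t_max=1.0, rate_max=35.0)
print(len(spikes), spikes[:5])
```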
STDP Allows Fast Rate-Modulated Coding with Poisson-Like Spike Trains
Hugues, Etienne
2011-01-01
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks. PMID:22046113
NASA Astrophysics Data System (ADS)
Vianello, Giacomo
2018-05-01
Several experiments in high-energy physics and astrophysics can be treated as on/off measurements, where an observation potentially containing a new source or effect (“on” measurement) is contrasted with a background-only observation free of the effect (“off” measurement). In counting experiments, the significance of the new source or effect can be estimated with a widely used formula from Li & Ma, which assumes that both measurements are Poisson random variables. In this paper we study three other cases: (i) the ideal case where the background measurement has no uncertainty, which can be used to study the maximum sensitivity that an instrument can achieve, (ii) the case where the background estimate b in the off measurement has an additional systematic uncertainty, and (iii) the case where b is a Gaussian random variable instead of a Poisson random variable. The latter case applies when b comes from a model fitted on archival or ancillary data, or from the interpolation of a function fitted on data surrounding the candidate new source/effect. Practitioners typically use a formula that is only valid when b is large and when its uncertainty is very small, while we derive a general formula that can be applied in all regimes. We also develop simple methods that can be used to assess how much an estimate of significance is sensitive to systematic uncertainties on the efficiency or on the background. Examples of applications include the detection of short gamma-ray bursts and of new X-ray or γ-ray sources. All the techniques presented in this paper are made available in a Python code that is ready to use.
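For reference, the widely used Li & Ma formula for the Poisson on/off case (their Eq. 17) takes only a few lines; the counts below are arbitrary sample values, and the expression assumes both counts are non-zero.

```python
# Li & Ma (1983) significance for on/off counting measurements.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """alpha is the on/off exposure ratio t_on / t_off."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

print(li_ma_significance(n_on=120, n_off=1000, alpha=0.1))   # approx 1.8 sigma
```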
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 x 2 complex non-Hermitian random matrices.
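The lowest-order identity is easy to probe numerically: nearest-neighbour distances in a 2D homogeneous Poisson sample, rescaled to unit mean, should follow the GOE Wigner surmise p(s) = (π/2) s exp(−πs²/4). The sketch below (our check, which ignores small boundary effects) does exactly this.

```python
# Nearest-neighbour spacings of a 2D Poisson process vs the GOE Wigner surmise.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(11)
pts = rng.uniform(0.0, 100.0, size=(20_000, 2))   # homogeneous Poisson sample

d, _ = cKDTree(pts).query(pts, k=2)               # k=2: self plus nearest neighbour
s = d[:, 1] / d[:, 1].mean()                      # spacings rescaled to unit mean

hist, edges = np.histogram(s, bins=30, range=(0.0, 3.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
wigner = 0.5 * np.pi * mid * np.exp(-0.25 * np.pi * mid**2)
print(np.abs(hist - wigner).max())                # small: the two curves agree closely
```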
Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des
2007-09-01
Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively, for the period January 1, 1996 to December 31, 2004. Time series Poisson regression and seasonal autoregressive integrated moving average (SARIMA) models were fitted to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both the time series Poisson regression and SARIMA models show that seasonal and monthly maximum temperature, at prior moving averages of 1 and 3 months, were significantly associated with cryptosporidiosis. The models suggest around 50 additional cases a year for a 1°C increase in average maximum temperature in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modeling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis. A SARIMA model may be a better predictive model than a Poisson regression model in the assessment of the relationship between weather variability and the incidence of cryptosporidiosis.
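The time series Poisson regression component of such an analysis is straightforward to set up. Below is a minimal sketch on synthetic monthly data (the Brisbane records are not reproduced here); the temperature effect size and lag structure are invented for illustration.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 108                                              # 9 years of monthly data
tmax = 24 + 5 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 1, n)
lag1 = pd.Series(tmax).shift(1).bfill().to_numpy()   # 1-month lagged Tmax
y = rng.poisson(np.exp(-2.0 + 0.12 * lag1))          # counts driven by lagged Tmax

X = sm.add_constant(lag1)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)    # slope near 0.12; exp(slope) = rate ratio per degree C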
Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions
ERIC Educational Resources Information Center
Desjardins, Christopher David
2016-01-01
The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative binomial hurdle.
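A quick way to see why a plain Poisson model fails for such data is to simulate zero-inflated, overdispersed counts and compare the empirical variance and zero fraction with what a Poisson fit would imply. The parameters below are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
m = 2000
zeros = rng.uniform(size=m) < 0.6                    # structural zeros (60%)
counts = np.where(zeros, 0, rng.negative_binomial(1.2, 0.25, size=m))

print("mean              :", counts.mean())
print("variance          :", counts.var())           # far above the mean
print("share of zeros    :", (counts == 0).mean())
print("Poisson-implied P0:", np.exp(-counts.mean())) # much smaller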
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
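For orientation, the core of sinusoidally weighted Poisson-gap sampling can be sketched as follows. This is a simplified reading of the published idea (gaps drawn from a Poisson law whose parameter is tuned to hit the sample budget, with a sine weighting that keeps early gaps small); the tuning loop and weighting details are illustrative, not the reference implementation.

import numpy as np

def poisson_gap(grid_size, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    lam = grid_size / n_samples - 1.0          # initial mean gap
    for _ in range(200):                       # tune lam to hit n_samples
        picks, pos = [], 0
        while pos < grid_size:
            picks.append(pos)
            theta = np.pi * pos / (2.0 * grid_size)   # small gaps early
            pos += 1 + rng.poisson(lam * np.sin(theta))
        if len(picks) == n_samples:
            return np.array(picks)
        lam *= len(picks) / n_samples          # too many picks -> larger gaps
    return np.array(picks)                     # close, if no exact hit

print(poisson_gap(128, 32)[:10])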
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph
2018-07-01
To simulate the impacts of within-storm rainfall variability on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last intervals of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated for its ability to disaggregate observed precipitation events in comparison to existing MRC models, using a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed markedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
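The microcanonical ingredient is what makes the coupling work: each disaggregation step must redistribute, never create or destroy, rainfall mass. The sketch below (our own toy version, with an arbitrary dry-branch probability and beta-distributed weights) shows one cascade level; the constrained-cascade and sigmoid-weight modifications of the paper are not reproduced.

import numpy as np

rng = np.random.default_rng(3)

def disaggregate(depths, p_dry=0.3):
    # One cascade level: split each value into two with microcanonical weights.
    out = []
    for d in depths:
        if d == 0.0:
            out += [0.0, 0.0]
        elif rng.uniform() < p_dry:           # all rain into one half
            out += [d, 0.0] if rng.uniform() < 0.5 else [0.0, d]
        else:                                 # split by a random weight
            w = rng.beta(2.0, 2.0)
            out += [w * d, (1.0 - w) * d]
    return np.array(out)

hourly = np.array([4.0, 0.0, 1.5])            # coarse totals (e.g. mm)
halfhour = disaggregate(hourly)
print(halfhour, np.isclose(halfhour.sum(), hourly.sum()))  # mass conserved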
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only temporal random effects, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved with efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, as indicated by its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease within each component, given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
Random parameter models for accident prediction on two-lane undivided highways in India.
Dinu, R R; Veeraragavan, A
2011-02-01
Generalized linear modeling (GLM), with the assumption of a Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified, and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways exhibits substantial internal variability, ranging from differences in vehicle types to variability in driver behavior. This can result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume; the proportions of cars, motorized two-wheelers, and trucks in traffic; driveway density; and horizontal and vertical curvature are randomly distributed across locations. The paper concludes with a discussion of the modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
Petersen, James H.; DeAngelis, Donald L.
1992-01-01
The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
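The two timing hypotheses are easy to contrast in simulation. The sketch below (illustrative rates, not the Columbia River fits) generates capture counts from a plain Poisson renewal process and from a compound Poisson process with Poisson-sized feeding bouts; both match in mean, but the compound version is overdispersed, which is the clustering signature tested for here.

import numpy as np

rng = np.random.default_rng(4)
T, trials = 24.0, 5000                        # hours observed, replicates

simple = rng.poisson(2.0 * T, size=trials)    # captures at rate 2/h

bouts = rng.poisson(0.5 * T, size=trials)     # bouts at rate 0.5/h
clustered = np.array([rng.poisson(4.0, size=b).sum() for b in bouts])

for name, x in [("Poisson", simple), ("compound", clustered)]:
    print(name, "mean:", x.mean(), "var/mean:", x.var() / x.mean())
# The compound process matches the mean but has var/mean well above 1.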
Yen, A M-F; Liou, H-H; Lin, H-L; Chen, T H-H
2006-01-01
The study aimed to develop a predictive model to deal with data fraught with heterogeneity that cannot be explained by sampling variation or measured covariates. The random-effect Poisson regression model was first proposed to deal with over-dispersion for data fraught with heterogeneity after making allowance for measured covariates. A Bayesian acyclic graphical model in conjunction with the Markov Chain Monte Carlo (MCMC) technique was then applied to estimate the parameters of both the relevant covariates and the random effect. A predictive distribution was then generated to compare the predicted with the observed values for the Bayesian model with and without the random effect. Data from repeated measurements of episodes among 44 patients with intractable epilepsy were used as an illustration. Applying Poisson regression to the epilepsy data without taking heterogeneity into account yielded a large value of heterogeneity (heterogeneity factor = 17.90, deviance = 1485, degrees of freedom (df) = 83). After taking the random effect into account, the heterogeneity factor was greatly reduced (heterogeneity factor = 0.52, deviance = 42.5, df = 81). The Pearson χ² statistics for the comparison between the expected and observed seizure frequencies at two and three months were 34.27 (p = 1.00) for the model with the random effect and 1799.90 (p < 0.0001) for the model without it. The Bayesian acyclic graphical model using the MCMC method was demonstrated to have great potential for disease prediction when data show over-dispersion attributable either to correlation or to subject-to-subject variability.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
2015-12-24
but the fundamental approach remains unchanged. We consider the case of a sports memorabilia shop whose owner is an avid personal collector of baseball...collector's competition 15 days from now. Between now and then, as customers bring in antique baseball cards, he must decide which ones to purchase for his...purchased from the shop each day is a random variable that is Poisson distributed with λout = 2. • 20% of cards are 5.25 in², 10% are 9.97 in², and 70% are
Probabilistic Estimation of Rare Random Collisions in 3 Space
2009-03-01
extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the...application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly...the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time
Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates
Gray, B.R.
2005-01-01
The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively). However, the zero-modified Poisson models underestimated small counts (1 ≤ y ≤ 4) and overestimated intermediate counts (7 ≤ y ≤ 23). Counts greater than zero were estimated well by zero-modified negative binomial models, while counts greater than one were also estimated well by the standard negative binomial model. Based on AIC and percent zero estimation criteria, the two-stage and zero-inflated models performed similarly. The above inferences were largely confirmed when the models were used to predict values from a separate, evaluation data set (n = 110). An exception was that, using the evaluation data set, the standard negative binomial model appeared superior to its zero-modified counterparts using the AIC (but not percent zero criteria). This and other evidence suggest that a negative binomial distributional assumption should be routinely considered when modelling benthic macroinvertebrate data from low flow environments. Whether negative binomial models should themselves be routinely examined for extra zeroes requires, from a statistical perspective, more investigation. However, this question may best be answered by ecological arguments that may be specific to the sampled species and locations. © 2004 Elsevier B.V. All rights reserved.
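The model-ranking step of such an analysis can be reproduced in outline with statsmodels. The sketch below fits intercept-only Poisson, negative binomial and zero-inflated negative binomial models to synthetic zero-heavy counts and compares AIC; the data and settings are invented, and the zero-inflated fit may need extra iterations to converge.

import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(5)
n = 959
y = np.where(rng.uniform(size=n) < 0.43, 0,          # extra zeros
             rng.negative_binomial(1.0, 0.1, size=n))
X = np.ones((n, 1))                                   # intercept-only models

fits = {
    "Poisson": sm.Poisson(y, X).fit(disp=False),
    "NegBin": sm.NegativeBinomial(y, X).fit(disp=False),
    "ZINB": ZeroInflatedNegativeBinomialP(y, X).fit(disp=False, maxiter=500),
}
for name, f in fits.items():
    print(name, "AIC:", round(f.aic, 1))              # NB-type fits win here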
Differential expression analysis for RNAseq using Poisson mixed models.
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang
2017-06-20
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three data sets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in ℝ^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
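The priority trick is easy to emulate serially. In the sketch below (our own Euclidean toy, not the authors' intrinsic, surface-based algorithm), each candidate gets a random unique priority and, for every conflicting pair, the lower-priority candidate is discarded; the surviving set respects the disk radius, though this greedy single pass is conservative and not maximal.

import numpy as np
from scipy.spatial import cKDTree

def poisson_disk(n_cand=4000, r=0.03, seed=6):
    rng = np.random.default_rng(seed)
    cand = rng.uniform(size=(n_cand, 2))
    prio = rng.permutation(n_cand)             # random, unique priorities
    alive = np.ones(n_cand, dtype=bool)
    pairs = cKDTree(cand).query_pairs(r, output_type="ndarray")
    # Conflict resolution: the lower-priority member of each close pair dies,
    # so no two surviving points are closer than r. (The paper runs this
    # check in parallel over all candidates.)
    for i, j in pairs:
        loser = i if prio[i] < prio[j] else j
        alive[loser] = False
    return cand[alive]

print(len(poisson_disk()))                     # samples with pairwise gaps >= r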
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
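The failure mode being addressed is simple to demonstrate: in an accelerated leap over a decay step A -> 0 with hazard c, a Poisson-distributed reaction count can exceed the molecules available, while a binomial draw cannot. The rates and leap size below are chosen to exaggerate the effect.

import numpy as np

rng = np.random.default_rng(7)
c, tau, n0, reps = 2.0, 0.9, 10, 10000
p = 1.0 - np.exp(-c * tau)                    # per-molecule reaction probability

poisson_new = n0 - rng.poisson(c * tau * n0, size=reps)
binom_new = n0 - rng.binomial(n0, p, size=reps)

print("Poisson leap, P(negative):", (poisson_new < 0).mean())   # > 0
print("Binomial leap, P(negative):", (binom_new < 0).mean())    # exactly 0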
On the fractal characterization of Paretian Poisson processes
NASA Astrophysics Data System (ADS)
Eliazar, Iddo I.; Sokolov, Igor M.
2012-06-01
Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that, amongst the realm of Poisson processes which are defined on the positive half-line and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes, with respect to physical randomness-based measures of statistical heterogeneity, is characterized by exponential Poissonian intensities.
Prescription-induced jump distributions in multiplicative Poisson processes.
Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos
2011-06-01
Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical property in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
Poisson's ratio of fiber-reinforced composites
NASA Astrophysics Data System (ADS)
Christiansson, Henrik; Helsing, Johan
1996-05-01
Poisson's ratio flow diagrams, that is, the Poisson's ratio versus the fiber fraction, are obtained numerically for hexagonal arrays of elastic circular fibers in an elastic matrix. High numerical accuracy is achieved through the use of an interface integral equation method. Questions concerning fixed point theorems and the validity of existing asymptotic relations are investigated and partially resolved. Our findings for the transverse effective Poisson's ratio, together with earlier results for random systems by other authors, make it possible to formulate a general statement for Poisson's ratio flow diagrams: For composites with circular fibers and where the phase Poisson's ratios are equal to 1/3, the system with the lowest stiffness ratio has the highest Poisson's ratio. For other choices of the elastic moduli for the phases, no simple statement can be made.
Fedosov’s formal symplectic groupoids and contravariant connections
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2006-10-01
Using Fedosov's approach we give a geometric construction of a formal symplectic groupoid over any Poisson manifold endowed with a torsion-free Poisson contravariant connection. In the case of Kähler-Poisson manifolds this construction provides, in particular, the formal symplectic groupoids with separation of variables. We show that the dual of a semisimple Lie algebra does not admit torsion-free Poisson contravariant connections.
Time distributions of solar energetic particle events: Are SEPEs really random?
NASA Astrophysics Data System (ADS)
Jiggens, P. T. A.; Gabriel, S. B.
2009-10-01
Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes and compared these with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. Inherent Poisson assumptions of stationarity and event independence are investigated, and it appears that they do not hold and can be dependent upon the event definition used. SEPEs appear to have some memory indicating that events are not completely random with activity levels varying even during solar active periods and are characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment should be modified to incorporate long-term event dependency and short-term system memory.
Normal forms of dispersive scalar Poisson brackets with two independent variables
NASA Astrophysics Data System (ADS)
Carlet, Guido; Casati, Matteo; Shadrin, Sergey
2018-03-01
We classify the dispersive Poisson brackets with one dependent variable and two independent variables, with leading order of hydrodynamic type, up to Miura transformations. We show that, in contrast to the case of a single independent variable for which a well-known triviality result exists, the Miura equivalence classes are parametrised by an infinite number of constants, which we call numerical invariants of the brackets. We obtain explicit formulas for the first few numerical invariants.
Seasonally adjusted birth frequencies follow the Poisson distribution.
Barra, Mathias; Lindstrøm, Jonas C; Adams, Samantha S; Augestad, Liv A
2015-12-15
Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test (p < 0.01). The fundamental model with year and month as explanatory variables is significantly improved (p < 0.001) by adding day of the week as an explanatory variable. Altogether 7.5% more children are born on Tuesdays than on Sundays. The digit sum of the date is non-significant as an explanatory variable (p = 0.23), nor does it increase the explained variance. INTERPRETATION: Spontaneous births are well modelled by a time-dependent Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in summer towards June and July, Friday and Tuesday stand out as particularly busy days, and the activity level is at its lowest during weekends.
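The final model described here, Poisson regression of daily counts on year, month and day of week, is a few lines in statsmodels. The sketch below runs on simulated births with an invented Tuesday bump and summer peak, so the coefficients are illustrative, not those of the Akershus data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
days = pd.date_range("1999-01-01", "2014-12-31", freq="D")
mu = np.exp(2.2 + 0.05 * days.month.isin([6, 7])      # summer peak
            + 0.07 * (days.dayofweek == 1))           # Tuesday bump
df = pd.DataFrame({"births": rng.poisson(mu),
                   "year": days.year, "month": days.month,
                   "weekday": days.dayofweek})

fit = smf.glm("births ~ C(year) + C(month) + C(weekday)", data=df,
              family=sm.families.Poisson()).fit()
print(np.exp(fit.params.filter(like="weekday")))      # rate ratios vs. Monday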
Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan
2017-01-01
Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2017-07-01
For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α > 0 and β > 0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 - (1 + d/β)^(-α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α < β and β > 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
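A sketch of the two quantities involved is below. The approximate dose-response formula is as given in the abstract; for the validity measure we assume r follows a gamma distribution with shape α̂ and rate β̂, which is our reading of the construction and should be treated as an assumption.

import numpy as np
from scipy.stats import gamma

def p_infection(d, alpha, beta):
    # Approximate beta-Poisson dose-response: 1 - (1 + d/beta)^(-alpha).
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity(alpha_hat, beta_hat):
    # Pr(0 < r < 1), assuming r ~ Gamma(shape=alpha_hat, rate=beta_hat).
    return gamma.cdf(1.0, a=alpha_hat, scale=1.0 / beta_hat)

a_hat, b_hat = 0.3, 40.0
print(p_infection(10.0, a_hat, b_hat))
print(validity(a_hat, b_hat), b_hat > (22 * a_hat) ** 0.5)  # rule of thumb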
Estimating safety effects of pavement management factors utilizing Bayesian random effect models.
Jiang, Ximiao; Huang, Baoshan; Zaretzki, Russell L; Richards, Stephen; Yan, Xuedong
2013-01-01
Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. This research is based on the crash data and corresponding pavement quality data for the Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall 6 models were built accounting for no correlation, temporal correlation only, and both the temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from the Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic-related crashes. Hence, maintaining a low level of pavement roughness is strongly suggested. In addition, the results suggested that the temporal correlation among observations was significant and that the ORENB model outperformed all other models.
Hosseinpour, Mehdi; Yahaya, Ahmad Shukri; Sadullah, Ahmad Farhan
2014-01-01
Head-on crashes are among the most severe collision types and of great concern to road safety authorities, which justifies further efforts to reduce both the frequency and severity of this collision type. To this end, it is necessary first to identify the factors associated with crash occurrence. This can be done by developing crash prediction models that relate crash outcomes to a set of contributing factors. This study intends to identify the factors affecting both the frequency and severity of head-on crashes that occurred on 448 segments of five federal roads in Malaysia. Data on road characteristics and crash history were collected on the study segments during a 4-year period between 2007 and 2010. The frequency of head-on crashes was modelled by developing and comparing seven count-data models, including Poisson, standard negative binomial (NB), random-effect negative binomial, hurdle Poisson, hurdle negative binomial, zero-inflated Poisson, and zero-inflated negative binomial models. To model crash severity, a random-effect generalized ordered probit model (REGOPM) was used given that a head-on crash had occurred. With respect to crash frequency, the random-effect negative binomial (RENB) model was found to outperform the other models according to goodness-of-fit measures. Based on the results of the model, the variables horizontal curvature, terrain type, heavy-vehicle traffic, and access points were found to be positively related to the frequency of head-on crashes, while posted speed limit and shoulder width decreased the crash frequency. With regard to crash severity, the results of the REGOPM showed that horizontal curvature, paved shoulder width, terrain type, and side friction were associated with more severe crashes, whereas land use, access points, and presence of a median reduced the probability of severe crashes. Based on the results of this study, some potential countermeasures were proposed to minimize the risk of head-on crashes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
Impact of the Fano Factor on Position and Energy Estimation in Scintillation Detectors.
Bora, Vaibhav; Barrett, Harrison H; Jha, Abhinav K; Clarkson, Eric
2015-02-01
The Fano factor for an integer-valued random variable is defined as the ratio of its variance to its mean. Light from various scintillation crystals has been reported to have Fano factors from sub-Poisson (Fano factor < 1) to super-Poisson (Fano factor > 1). For a given mean, a smaller Fano factor implies a smaller variance and thus less noise. We investigated whether lower noise in the scintillation light results in better spatial and energy resolution. The impact of the Fano factor on the estimation of the position of interaction and the energy deposited in simple gamma-camera geometries was assessed by two methods: calculating the Cramér-Rao bound and estimating the variance of a maximum likelihood estimator. The methods are consistent with each other and indicate that, when estimating the position of interaction and the energy deposited by a gamma-ray photon, the Fano factor of a scintillator does not affect the spatial resolution. A smaller Fano factor results in a better energy resolution.
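The quantity itself is easy to illustrate: the sketch below generates Poisson, sub-Poisson (binomial) and super-Poisson (gamma-mixed Poisson) counts with a common mean and prints the empirical Fano factor of each; the parameters are arbitrary.

import numpy as np

rng = np.random.default_rng(9)
n, mean = 100000, 50.0

poisson = rng.poisson(mean, n)                          # F = 1
sub = rng.binomial(100, mean / 100, n)                  # F = 1 - p = 0.5
super_ = rng.poisson(rng.gamma(25.0, mean / 25.0, n))   # F = 1 + 100/50 = 3

for name, x in [("Poisson", poisson), ("sub", sub), ("super", super_)]:
    print(name, "Fano:", round(x.var() / x.mean(), 2))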
Statistical mapping of count survey data
Royle, J. Andrew; Link, W.A.; Sauer, J.R.; Scott, J. Michael; Heglund, Patricia J.; Morrison, Michael L.; Haufler, Jonathan B.; Wall, William A.
2002-01-01
We apply a Poisson mixed model to the problem of mapping (or predicting) bird relative abundance from counts collected from the North American Breeding Bird Survey (BBS). The model expresses the logarithm of the Poisson mean as a sum of a fixed term (which may depend on habitat variables) and a random effect which accounts for remaining unexplained variation. The random effect is assumed to be spatially correlated, thus providing a more general model than the traditional Poisson regression approach. Consequently, the model is capable of improved prediction when data are autocorrelated. Moreover, formulation of the mapping problem in terms of a statistical model facilitates a wide variety of inference problems which are cumbersome or even impossible using standard methods of mapping. For example, assessment of prediction uncertainty, including the formal comparison of predictions at different locations, or through time, using the model-based prediction variance is straightforward under the Poisson model (not so with many nominally model-free methods). Also, ecologists may generally be interested in quantifying the response of a species to particular habitat covariates or other landscape attributes. Proper accounting for the uncertainty in these estimated effects is crucially dependent on specification of a meaningful statistical model. Finally, the model may be used to aid in sampling design, by modifying the existing sampling plan in a manner which minimizes some variance-based criterion. Model fitting under this model is carried out using a simulation technique known as Markov Chain Monte Carlo. Application of the model is illustrated using Mourning Dove (Zenaida macroura) counts from Pennsylvania BBS routes. We produce both a model-based map depicting relative abundance, and the corresponding map of prediction uncertainty. We briefly address the issue of spatial sampling design under this model. Finally, we close with some discussion of mapping in relation to habitat structure. Although our models were fit in the absence of habitat information, the resulting predictions show a strong inverse relation with a map of forest cover in the state, as expected. Consequently, the results suggest that the correlated random effect in the model is broadly representing ecological variation, and that BBS data may be generally useful for studying bird-habitat relationships, even in the presence of observer errors and other widely recognized deficiencies of the BBS.
Poisson Regression Analysis of Illness and Injury Surveillance Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frome E.L., Watkins J.P., Ellis E.D.
2012-12-12
The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation. The R open source software environment for statistical computing and graphics is used for analysis. Additional details about R and the data that were used in this report are provided in an Appendix. Information on how to obtain R and utility functions that can be used to duplicate results in this report are provided.
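In outline, the model described here is a Poisson GLM for event counts with log person-time as an offset, refit with a moment-based scale when over-dispersion is flagged. The sketch below uses Python's statsmodels on a synthetic roster (the report's own analysis is in R); the covariate and the amount of extra-Poisson variation are invented.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
k = 200
pt = rng.uniform(50, 500, k)                    # person-years per stratum
age = rng.integers(0, 4, k)                     # age-group code
rate = np.exp(-3.0 + 0.25 * age)
events = rng.poisson(rate * pt * rng.gamma(5, 0.2, k))  # extra variation

X = sm.add_constant(age)
model = sm.GLM(events, X, family=sm.families.Poisson(), offset=np.log(pt))
fit = model.fit()
print("Pearson chi2 / df:", fit.pearson_chi2 / fit.df_resid)  # >> 1 here
quasi = model.fit(scale="X2")                   # moment-based adjustment
print("SEs inflate by:", (quasi.bse / fit.bse).round(2))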
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
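The layered ("triply stochastic") structure can be sketched by generating photon events from a modulated burst intensity via thinning; the burst parameters below are invented, and the slowest (flow) layer would re-draw them from burst to burst.

```python
import numpy as np

rng = np.random.default_rng(1)

# Burst intensity: Gaussian pedestal modulated at the Doppler frequency
# (hypothetical numbers). A third, slower "flow" layer would randomize
# f_dopp and t0 for each successive burst.
f_dopp, t0, w, lam0 = 5e5, 5e-5, 1.5e-5, 2e7

def lam(t):
    pedestal = np.exp(-((t - t0) / w) ** 2)
    return lam0 * pedestal * 0.5 * (1 + np.cos(2 * np.pi * f_dopp * t))

# Thinning (Lewis-Shedler): homogeneous candidates at the peak rate, each
# kept with probability lam(t)/lam_max, yield the photon arrival times.
T, lam_max = 1e-4, lam0
n = rng.poisson(lam_max * T)
cand = np.sort(rng.uniform(0, T, n))
photons = cand[rng.uniform(size=n) < lam(cand) / lam_max]
print(photons.size, "photoelectron events in the record")
```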
Spatial distribution of psychotic disorders in an urban area of France: an ecological study.
Pignon, Baptiste; Schürhoff, Franck; Baudin, Grégoire; Ferchiou, Aziz; Richard, Jean-Romain; Saba, Ghassen; Leboyer, Marion; Kirkbride, James B; Szöke, Andrei
2016-05-18
Previous analyses of neighbourhood variations of non-affective psychotic disorders (NAPD) have focused mainly on incidence. However, prevalence studies provide important insights on factors associated with disease evolution as well as for healthcare resource allocation. This study aimed to investigate the distribution of prevalent NAPD cases in an urban area in France. The number of cases in each neighbourhood was modelled as a function of potential confounders and ecological variables, namely: migrant density, economic deprivation and social fragmentation. This was modelled using statistical models of increasing complexity: frequentist models (using Poisson and negative binomial regressions), and several Bayesian models. For each model, the validity of its assumptions was checked, and the models were compared on goodness of fit to the data, in order to test for possible spatial variation in prevalence. Data showed significant overdispersion (invalidating the Poisson regression model) and residual autocorrelation (suggesting the need to use Bayesian models). The best Bayesian model was Leroux's model (i.e. a model with both strong correlation between neighbouring areas and weaker correlation between areas further apart), with economic deprivation as an explanatory variable (OR = 1.13, 95% CI [1.02-1.25]). In comparison with frequentist methods, the Bayesian model showed a better fit. The number of cases showed non-random spatial distribution and was linked to economic deprivation.
Weak convergence to isotropic complex SαS random measure.
Wang, Jun; Li, Yunmeng; Sang, Liheng
2017-01-01
In this paper, we prove that an isotropic complex symmetric α-stable random measure (complex SαS random measure) can be approximated by a complex process constructed from integrals based on a Poisson process with random intensity.
Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D
2011-04-01
Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental effects as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted the data slightly better than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.
Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case
NASA Astrophysics Data System (ADS)
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.
2010-06-01
Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
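The Monte Carlo comparison of the two integrals can be sketched for the normal compound Poisson case: with pre-point (Itô) and mid-point (Stratonovich) evaluation the estimates differ by half the accumulated squared jumps (parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(7)

lam, T, n_paths = 10.0, 1.0, 20000   # jump rate, horizon, Monte Carlo size

ito, strat = [], []
for _ in range(n_paths):
    n = rng.poisson(lam * T)                           # number of jumps
    x = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n))))
    dx = np.diff(x)
    ito.append(np.sum(x[:-1] * dx))                    # pre-point (Ito)
    strat.append(np.sum(0.5 * (x[:-1] + x[1:]) * dx))  # mid-point (Stratonovich)

# For this pure-jump process the two differ by half the sum of squared jumps:
# E[Ito] = (E[X_T^2] - lam*T*E[J^2])/2 = 0, while E[Strat] = E[X_T^2]/2 = lam*T/2.
print("Ito   mean:", np.mean(ito))    # ~ 0
print("Strat mean:", np.mean(strat))  # ~ 5
```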
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
Tayyib, Nahla; Coyer, Fiona; Lewis, Peter A
2015-05-01
This study tested the effectiveness of a pressure ulcer (PU) prevention bundle in reducing the incidence of PUs in critically ill patients in two Saudi intensive care units (ICUs). A two-arm cluster randomized experimental control trial. Participants in the intervention group received the PU prevention bundle, while the control group received standard skin care as per the local ICU policies. Data collected included demographic variables (age, diagnosis, comorbidities, admission trajectory, length of stay) and clinical variables (Braden Scale score, severity of organ function score, mechanical ventilation, PU presence, and staging). All patients were followed every two days from admission through to discharge, death, or up to a maximum of 28 days. Data were analyzed with descriptive correlation statistics, Kaplan-Meier survival analysis, and Poisson regression. The total number of participants recruited was 140: 70 control participants (with a total of 728 days of observation) and 70 intervention participants (784 days of observation). PU cumulative incidence was significantly lower in the intervention group (7.14%) compared to the control group (32.86%). Poisson regression revealed the likelihood of PU development was 70% lower in the intervention group. The intervention group had significantly less Stage I (p = .002) and Stage II PU development (p = .026). Significant improvements were observed in PU-related outcomes with the implementation of the PU prevention bundle in the ICU; PU incidence, severity, and total number of PUs per patient were reduced. Utilizing a bundle approach and standardized nursing language through skin assessment and translation of the knowledge to practice has the potential to impact positively on the quality of care and patient outcome. © 2015 Sigma Theta Tau International.
Simulation of diffuse-charge capacitance in electric double layer capacitors
NASA Astrophysics Data System (ADS)
Sun, Ning; Gersappe, Dilip
2017-01-01
We use a Lattice Boltzmann Model (LBM) in order to simulate diffuse-charge dynamics in Electric Double Layer Capacitors (EDLCs). Simulations are carried out for both the charge and the discharge processes on 2D systems of complex random electrode geometries (pure random, random spheres and random fibers). The steric effect of concentrated solutions is considered by using a Modified Poisson-Nernst-Planck (MPNP) equations and compared with regular Poisson-Nernst-Planck (PNP) systems. The effects of electrode microstructures (electrode density, electrode filler morphology, filler size, etc.) on the net charge distribution and charge/discharge time are studied in detail. The influence of applied potential during discharging process is also discussed. Our studies show how electrode morphology can be used to tailor the properties of supercapacitors.
Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA
2009-06-23
A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses, and the random Poisson-distributed pulse timing for radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while outputting the digital value to a digital to analog converter (DAC), and pulse amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
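A software-only sketch of the same idea (hypothetical count rate, decay constant, and amplitude range; the patent implements this on a DSP feeding a DAC): draw Poisson arrival times and random amplitudes, then superpose exponentially decaying tails so that overlapping pulses pile up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated tailpulse stream: Poisson arrivals, random amplitudes,
# exponential decay; overlapping tails add, reproducing pileup.
rate, T, tau, fs = 5e4, 2e-3, 2e-6, 5e7   # counts/s, window (s), decay (s), sample rate (Hz)
t = np.arange(0, T, 1 / fs)
signal = np.zeros_like(t)

n = rng.poisson(rate * T)
arrivals = np.sort(rng.uniform(0, T, n))  # Poisson-distributed pulse timing
amps = rng.uniform(0.1, 1.0, n)           # pseudo-random pulse amplitudes

for t0, a in zip(arrivals, amps):
    m = t >= t0
    signal[m] += a * np.exp(-(t[m] - t0) / tau)  # exponential tail; sums give pileup
```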
NASA Astrophysics Data System (ADS)
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of price changes in real estate markets. We suppose that the changes in a book of prices occur at the jump points of a Poisson process with a random intensity, i.e., the moments of change follow a point process of the Cox type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case that the random intensity process is a martingale, the cumulative variance grows linearly. We statistically analyze a series of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.
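A sketch of the basic construction (illustrative parameters only, not the fitted market model): sample a positive-martingale intensity path, then draw the change times as a Poisson process with that random intensity, i.e., a Cox process, by thinning.

```python
import numpy as np

rng = np.random.default_rng(11)

# Random intensity driven by a positive martingale (geometric Brownian motion
# with zero drift) -- a stand-in for the random intensity in the price model.
T, dt = 10.0, 1e-3
grid = np.arange(0.0, T, dt)
bw = np.cumsum(rng.standard_normal(grid.size)) * np.sqrt(dt)  # Brownian path
lam = 50.0 * np.exp(0.3 * bw - 0.5 * 0.3**2 * grid)           # E[lam_t] = 50

# Given the intensity path, draw the change times by thinning.
lam_max = lam.max()
n = rng.poisson(lam_max * T)
cand = np.sort(rng.uniform(0, T, n))
idx = np.minimum((cand / dt).astype(int), lam.size - 1)
change_times = cand[rng.uniform(size=n) < lam[idx] / lam_max]
print(change_times.size, "price changes in the book over", T, "time units")
```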
Super-stable Poissonian structures
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2012-10-01
In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.
Estimating random errors due to shot noise in backscatter lidar observations.
Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang
2006-06-20
We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson- distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.
Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations
NASA Technical Reports Server (NTRS)
Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang
2006-01-01
In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
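The NSF idea can be demonstrated with a toy multiplied-photocurrent model (gain and signal levels invented): because the variance stays proportional to the mean, the ratio of RMS noise to the square root of the mean signal is roughly constant across signal levels, and once calibrated it converts a single measurement m into an error estimate NSF·sqrt(m).

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy multiplied photocurrent: gain * Poisson photon counts (values invented).
gain = 40.0
mean_photons = np.linspace(5, 500, 50)            # range of signal levels
samples = gain * rng.poisson(mean_photons, (4000, 50))

# NSF = RMS noise / sqrt(mean signal); ~constant (= sqrt(gain)) across levels.
nsf = samples.std(axis=0) / np.sqrt(samples.mean(axis=0))
print(nsf.mean().round(2), "vs sqrt(gain) =", np.sqrt(gain).round(2))

# With a calibrated NSF, the shot-noise error of one sample m is NSF*sqrt(m).
m = samples[0, 25]
print("error estimate for a single sample:", nsf.mean() * np.sqrt(m))
```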
Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.
Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza
2016-01-01
Recognizing the factors affecting the number of blood donation and blood deferral has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donation" and "number of blood deferral": as the number of return for donation increases, so does the number of blood deferral. On the other hand, due to the fact that many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to apply the correlation and to explain the frequency of the excessive zero, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donation and number of blood deferral. The data was analyzed using the Bayesian approach applying noninformative priors at the presence and absence of covariates. Estimating the parameters of the model, that is, correlation, zero-inflation parameter, and regression coefficients, was done through MCMC simulation. Eventually double-Poisson model, bivariate Poisson model, and bivariate zero-inflated Poisson model were fitted on the data and were compared using the deviance information criteria (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach
Mohammadi, Tayeb; Sedehi, Morteza
2016-01-01
Recognizing the factors affecting the number of blood donation and blood deferral has a major impact on blood transfusion. There is a positive correlation between the variables “number of blood donation” and “number of blood deferral”: as the number of return for donation increases, so does the number of blood deferral. On the other hand, due to the fact that many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to apply the correlation and to explain the frequency of the excessive zero, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donation and number of blood deferral. The data was analyzed using the Bayesian approach applying noninformative priors at the presence and absence of covariates. Estimating the parameters of the model, that is, correlation, zero-inflation parameter, and regression coefficients, was done through MCMC simulation. Eventually double-Poisson model, bivariate Poisson model, and bivariate zero-inflated Poisson model were fitted on the data and were compared using the deviance information criteria (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493
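One common construction of a bivariate zero-inflated Poisson (a sketch with invented parameters, not the fitted Bayesian model) uses a shared Poisson "common shock" for the positive correlation and a point mass at (0, 0) for the never-returning donors:

```python
import numpy as np

rng = np.random.default_rng(2024)

def rbzip(n, lam1, lam2, lam0, p):
    """Bivariate zero-inflated Poisson via a shared common shock X0.
    Y1 = X1 + X0 and Y2 = X2 + X0 give Cov(Y1, Y2) = lam0; with probability
    p the pair is replaced by (0, 0), producing the excess zeros."""
    x0 = rng.poisson(lam0, n)
    y1 = rng.poisson(lam1, n) + x0   # e.g. number of blood donations
    y2 = rng.poisson(lam2, n) + x0   # e.g. number of deferrals
    zero = rng.uniform(size=n) < p   # donors who never return
    y1[zero], y2[zero] = 0, 0
    return y1, y2

y1, y2 = rbzip(10000, 2.0, 1.0, 0.5, 0.3)
print("corr:", np.corrcoef(y1, y2)[0, 1], " P(0,0):", np.mean((y1 == 0) & (y2 == 0)))
```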
Investigation of spectral analysis techniques for randomly sampled velocimetry data
NASA Technical Reports Server (NTRS)
Sree, Dave
1993-01-01
It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scales information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'Direct Transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, on how high in frequency an accurate estimate can be made for a given mean sampling rate. These high-frequency estimates are important in obtaining the microscale information of turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high-frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low-frequency estimates. The prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique. This increased bandwidth, however, comes at the cost of the lower-frequency estimates. The studies further showed that large data sets of the order of 100,000 points or more, high data rates, and Poisson sampling are crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
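A bare-bones version of the slotting technique (illustrative test signal and slot width): products of sample pairs are averaged into lag bins to form a correlogram, whose transform then gives the spectrum.

```python
import numpy as np

rng = np.random.default_rng(9)

# Poisson-sampled signal: a narrowband process observed at random times.
mean_rate, T = 200.0, 50.0
t = np.sort(rng.uniform(0, T, rng.poisson(mean_rate * T)))
u = np.sin(2 * np.pi * 3.0 * t) + 0.3 * rng.standard_normal(t.size)
u -= u.mean()

# Slotted autocovariance: average all products u_i*u_j whose time lag falls
# into slot k of width dtau (the slotting technique of Mayo et al.).
dtau, n_slots = 0.01, 200
acov = np.zeros(n_slots)
counts = np.zeros(n_slots)
for i in range(t.size):
    j = np.searchsorted(t, t[i] + n_slots * dtau, side="right")
    lags = t[i:j] - t[i]
    k = (lags / dtau).astype(int)
    np.add.at(acov, k, u[i] * u[i:j])
    np.add.at(counts, k, 1)
acov /= np.maximum(counts, 1)

# The turbulence spectrum follows from a transform of the slotted correlogram.
spectrum = np.abs(np.fft.rfft(acov))
```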
The distribution of catchment coverage by stationary rainstorms
NASA Technical Reports Server (NTRS)
Eagleson, P. S.
1984-01-01
The occurrence of wetted rainstorm area within a catchment is modeled as a Poisson arrival process in which each storm is composed of stationary, nonoverlapping, independent random cell clusters whose centers are Poisson-distributed in space and whose areas are fractals. The two Poisson parameters and hence the first two moments of the wetted fraction are derived in terms of catchment average characteristics of the (observable) station precipitation. The model is used to estimate spatial properties of tropical air mass thunderstorms on six tropical catchments in the Sudan.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
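The paper's Monte Carlo comparison is easy to reproduce in spirit (gamma prior and sample size invented here): draw the intensity from the prior, draw Poisson data, and compare the prior-averaged mean-squared errors of the posterior-mean and maximum likelihood estimators.

```python
import numpy as np

rng = np.random.default_rng(17)

# Empirical MSE comparison of the Bayes estimator under a gamma(a, b) prior
# with the maximum likelihood (sample mean) estimator.
a, b = 2.0, 1.0                      # gamma prior with mean a/b
n, reps = 5, 200000
lam = rng.gamma(a, 1 / b, reps)      # intensities drawn from the prior
x = rng.poisson(lam[:, None], (reps, n))
s = x.sum(axis=1)

lam_mle = s / n                      # classical estimator
lam_bayes = (a + s) / (b + n)        # posterior mean: gamma(a + s, b + n)

print("MSE MLE  :", np.mean((lam_mle - lam) ** 2))
print("MSE Bayes:", np.mean((lam_bayes - lam) ** 2))  # smaller, as in the paper
```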
Infinitesimal Deformations of a Formal Symplectic Groupoid
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2011-09-01
Given a formal symplectic groupoid G over a Poisson manifold $(M, \pi_0)$, we define a new object, an infinitesimal deformation of G, which can be thought of as a formal symplectic groupoid over the manifold M equipped with an infinitesimal deformation $\pi_0 + \varepsilon \pi_1$ of the Poisson bivector field $\pi_0$. To any pair of natural star products $(\ast, \tilde{\ast})$ having the same formal symplectic groupoid G we relate an infinitesimal deformation of G. We call it the deformation groupoid of the pair $(\ast, \tilde{\ast})$. To each star product with separation of variables $\ast$ on a Kähler-Poisson manifold M we relate another star product with separation of variables $\hat{\ast}$ on M. We build an algorithm for calculating the principal symbols of the components of the logarithm of the formal Berezin transform of a star product with separation of variables $\ast$. This algorithm is based upon the deformation groupoid of the pair $(\ast, \hat{\ast})$.
Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data was collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of estimators were compared. The results obtained from mixed PR showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the minimum standard errors. The results showed that the mixed zero-inflated Poisson model provided the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in the longitudinal count data.
The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments
ERIC Educational Resources Information Center
Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.
2008-01-01
Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…
On Models for Binomial Data with Random Numbers of Trials
Comulada, W. Scott; Weiss, Robert E.
2010-01-01
A binomial outcome is a count s of the number of successes out of the total number of independent trials n = s + f, where f is a count of the failures. In many studies, n is a random variable, not fixed by design. Joint modeling of (s, f) can provide additional insight into the science and into the probability π of success that cannot be directly incorporated by the logistic regression model. Observations where n = 0 are excluded from the binomial analysis yet may be important to understanding how π is influenced by covariates. Correlation between s and f may exist and be of direct interest. We propose Bayesian multivariate Poisson models for the bivariate response (s, f), correlated through random effects. We extend our models to the analysis of longitudinal and multivariate longitudinal binomial outcomes. Our methodology was motivated by two disparate examples, one from teratology and one from an HIV tertiary intervention study. PMID:17688514
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
A Generalized QMRA Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2016-10-01
Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0
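For orientation, the standard two-parameter approximate beta-Poisson curve and the single-hit exponential model are sketched below (parameter values invented; the paper's generalized model additionally randomizes the minimum infectious number K_min, which is not reproduced here).

```python
import numpy as np

# Widely used single-hit dose-response curves (illustrative parameters).
def p_infection_beta_poisson(d, alpha, beta):
    """P(infection) at mean dose d under the approximate beta-Poisson model."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def p_infection_exponential(d, r):
    """Exponential model: each organism independently infects with probability r."""
    return 1.0 - np.exp(-r * d)

dose = np.logspace(-1, 4, 6)
print(p_infection_beta_poisson(dose, alpha=0.25, beta=40.0).round(3))
print(p_infection_exponential(dose, r=0.005).round(3))
```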
Hierarchical Bayesian Modeling of Fluid-Induced Seismicity
NASA Astrophysics Data System (ADS)
Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.
2017-11-01
In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We tested the Basel 2006 fluid-induced seismic case study to prove that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
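The core nonhomogeneous-Poisson ingredient can be sketched by thinning, with the seismicity rate proportional to a hypothetical injection profile (the hierarchical layer, i.e., priors on the rate parameters and their Bayesian updating, is omitted).

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical injection profile (ramp-up, then shut-in at t = 4 days) and a
# seismicity rate proportional to it: lam(t) = b * injection_rate(t).
def injection_rate(t):
    return np.where(t < 4.0, 20.0 * t, 0.0)

b, T = 0.6, 6.0
lam_max = b * 20.0 * 4.0          # peak rate just before shut-in

# Lewis-Shedler thinning for the nonhomogeneous Poisson process.
n = rng.poisson(lam_max * T)
cand = np.sort(rng.uniform(0, T, n))
events = cand[rng.uniform(size=n) < b * injection_rate(cand) / lam_max]
print(events.size, "induced events over", T, "days")
```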
Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading
NASA Technical Reports Server (NTRS)
Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.
1998-01-01
The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
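The mission-history generator can be sketched as follows (mission names, proportions, and the mean repeat rate are invented): missions are drawn at random and repeated a Poisson-distributed number of times, and each realization drives one fracture-mechanics life calculation.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical mission mix and mean number of repeats per mission block.
mission_types = ["training", "transport", "combat"]
proportions = [0.5, 0.3, 0.2]
mean_repeats = 25.0

def random_mission_history(n_blocks):
    """One realization: randomly ordered missions, Poisson repeat counts."""
    history = []
    for _ in range(n_blocks):
        mission = rng.choice(mission_types, p=proportions)
        repeats = rng.poisson(mean_repeats)
        history.extend([mission] * repeats)
    return history

# Each realization yields stress/temperature/time points for the fracture-
# mechanics calculation; many realizations build the life distribution.
print(len(random_mission_history(10)), "mission executions in one realization")
```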
A random-censoring Poisson model for underreported data.
de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins
2017-12-30
A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take the data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM) which accounts for the uncertainty about both the count and the data reporting processes. Consequently, for each region, we will be able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with 2 competitive models. Different scenarios are considered. The RCPM and the censored Poisson model are applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.
2016-12-23
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
NASA Astrophysics Data System (ADS)
Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.
2016-12-01
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
Possible Statistics of Two Coupled Random Fields: Application to Passive Scalar
NASA Technical Reports Server (NTRS)
Dubrulle, B.; He, Guo-Wei; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
We use the relativity postulate of scale invariance to derive the similarity transformations between two coupled scale-invariant random fields at different scales. We find the equations leading to the scaling exponents. This formulation is applied to the case of passive scalars advected i) by a random Gaussian velocity field; and ii) by a turbulent velocity field. In the Gaussian case, we show that the passive scalar increments follow a log-Lévy distribution generalizing Kraichnan's solution and, in an appropriate limit, a log-normal distribution. In the turbulent case, we show that when the velocity increments follow a log-Poisson statistics, the passive scalar increments follow a statistics close to log-Poisson. This result explains the experimental observations of Ruiz et al. about the temperature increments.
NASA Astrophysics Data System (ADS)
Xie, Dexuan; Jiang, Yi
2018-05-01
This paper reports a nonuniform ionic size nonlocal Poisson-Fermi double-layer model (nuNPF) and a uniform ionic size nonlocal Poisson-Fermi double-layer model (uNPF) for an electrolyte mixture of multiple ionic species, variable voltages on electrodes, and variable induced charges on boundary segments. The finite element solvers of nuNPF and uNPF are developed and applied to typical double-layer tests defined on a rectangular box, a hollow sphere, and a hollow rectangle with a charged post. Numerical results show that nuNPF can significantly improve the quality of the ionic concentrations and electric fields generated from uNPF, implying that the effect of nonuniform ion sizes is a key consideration in modeling the double-layer structure.
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
NASA Astrophysics Data System (ADS)
Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang
2016-02-01
This research modeled the unemployment rate in Indonesia based on the Poisson distribution, estimating it by combining post-stratification with a Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey was not designed to estimate the domain of interest. The domain of interest here was the educational attainment of the unemployed, divided into seven categories. The data were obtained from the National Labour Force Survey (Sakernas) collected by BPS-Statistics Indonesia; this national survey yields samples that are too small at the district level, and SAE models are one alternative for dealing with this. Accordingly, we combined post-stratification sampling with an SAE model. This research considered two main post-stratification models: model I treated the education category as a dummy variable, while model II treated it as an area random effect. Neither model initially satisfied the Poisson assumption. Using a Poisson-Gamma model, the overdispersion in model I was reduced from 1.23 to 0.91 (chi-square/df) and the underdispersion in model II was corrected from 0.35 to 0.94 (chi-square/df). Empirical Bayes was applied to estimate the proportion of each education category of unemployment. Using the Bayesian Information Criterion (BIC) and mean squared error (MSE), model I outperformed model II.
Analyzing Propensity Matched Zero-Inflated Count Outcomes in Observational Studies
DeSantis, Stacia M.; Lazaridis, Christos; Ji, Shuang; Spinale, Francis G.
2013-01-01
Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how to appropriately analyze group-matched data when the outcome is a zero-inflated count. In addition, there is debate over whether to account for correlation of responses induced by matching, and/or whether to adjust for variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate-unadjusted and covariate-adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set. PMID:24298197
ERIC Educational Resources Information Center
Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David
2012-01-01
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…
Gene regulation and noise reduction by coupling of stochastic processes
NASA Astrophysics Data System (ADS)
Ramos, Alexandre F.; Hornos, José Eduardo M.; Reinitz, John
2015-02-01
Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.
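A Gillespie-style simulation of a self-repressing binary gene illustrates the effect (rates are illustrative, chosen so that gene switching is fast relative to protein turnover; with these values the Fano factor typically falls below 1, the sub-Poissonian signature discussed above).

```python
import numpy as np

rng = np.random.default_rng(13)

# Gillespie simulation of a self-repressing binary gene: protein number n,
# gene state s (1 = active). The active gene makes protein at rate k; the
# protein switches its own gene off at rate h*n (all rates invented).
k, rho, f, h = 50.0, 1.0, 200.0, 4.0

n, s, t, T = 0, 1, 0.0, 500.0
samples, t_grid = [], 5.0          # start recording after a short burn-in
while t < T:
    rates = np.array([k * s,       # protein synthesis
                      rho * n,     # protein degradation
                      f * (1 - s), # gene switches on
                      h * n * s])  # protein switches gene off
    t += rng.exponential(1.0 / rates.sum())
    while t_grid < t:              # record n on a regular time grid
        samples.append(n)
        t_grid += 0.25
    r = rng.choice(4, p=rates / rates.sum())
    if r == 0:
        n += 1
    elif r == 1:
        n -= 1
    else:
        s = 1 if r == 2 else 0

x = np.array(samples)
print("Fano factor:", x.var() / x.mean())  # < 1 indicates sub-Poissonian noise
```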
Gene regulation and noise reduction by coupling of stochastic processes
Hornos, José Eduardo M.; Reinitz, John
2015-01-01
Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: there exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction. PMID:25768447
Gene regulation and noise reduction by coupling of stochastic processes.
Ramos, Alexandre F; Hornos, José Eduardo M; Reinitz, John
2015-02-01
Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.
Waiting-time distributions of magnetic discontinuities: clustering or Poisson process?
Greco, A; Matthaeus, W H; Servidio, S; Dmitruk, P
2009-10-01
Using solar wind data from the Advanced Composition Explorer spacecraft, with the support of Hall magnetohydrodynamic simulations, the waiting-time distributions of magnetic discontinuities have been analyzed. A possible phenomenon of clustering of these discontinuities is studied in detail. We perform a local Poisson analysis in order to establish whether these intermittent events are randomly distributed or not. Possible implications about the nature of solar wind discontinuities are discussed.
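The basic waiting-time diagnostic can be sketched as follows (synthetic event trains, not ACE data): under a homogeneous Poisson hypothesis the waiting times are exponential, so a Kolmogorov-Smirnov comparison, approximate here because the scale is estimated from the data, separates random from clustered trains.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def waiting_time_test(event_times):
    """KS comparison of waiting times against an exponential with the
    empirical mean; approximate, since the scale is fitted from the data."""
    dt = np.diff(np.sort(event_times))
    return stats.kstest(dt, "expon", args=(0, dt.mean()))

# Poisson events vs. a toy clustered (Neyman-Scott style) event set.
T = 1000.0
poisson_ev = np.sort(rng.uniform(0, T, 2000))
parents = rng.uniform(0, T, 200)
clustered_ev = np.sort((parents[:, None] + rng.exponential(0.2, (200, 10))).ravel())

for name, ev in [("Poisson", poisson_ev), ("clustered", clustered_ev)]:
    print(name, "KS p-value:", waiting_time_test(ev).pvalue)
```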
ERIC Educational Resources Information Center
Wilde, Carroll O.
The Poisson probability distribution is seen to provide a mathematical model from which useful information can be obtained in practical applications. The distribution and some situations to which it applies are studied, and ways to find answers to practical questions are noted. The unit includes exercises and a model exam, and provides answers to…
Waiting-time distributions of magnetic discontinuities: Clustering or Poisson process?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greco, A.; Matthaeus, W. H.; Servidio, S.
2009-10-15
Using solar wind data from the Advanced Composition Explorer spacecraft, with the support of Hall magnetohydrodynamic simulations, the waiting-time distributions of magnetic discontinuities have been analyzed. A possible phenomenon of clustering of these discontinuities is studied in detail. We perform a local Poisson analysis in order to establish whether these intermittent events are randomly distributed or not. Possible implications about the nature of solar wind discontinuities are discussed.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-05-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
Stochastic analysis of three-dimensional flow in a bounded domain
Naff, R.L.; Vecchia, A.V.
1986-01-01
A commonly accepted first-order approximation of the equation for steady state flow in a fully saturated spatially random medium has the form of Poisson's equation. This form allows for the advantageous use of Green's functions to solve for the random output (hydraulic heads) in terms of a convolution over the random input (the logarithm of hydraulic conductivity). A solution for steady state three- dimensional flow in an aquifer bounded above and below is presented; consideration of these boundaries is made possible by use of Green's functions to solve Poisson's equation. Within the bounded domain the medium hydraulic conductivity is assumed to be a second-order stationary random process as represented by a simple three-dimensional covariance function. Upper and lower boundaries are taken to be no-flow boundaries; the mean flow vector lies entirely in the horizontal dimensions. The resulting hydraulic head covariance function exhibits nonstationary effects resulting from the imposition of boundary conditions. Comparisons are made with existing infinite domain solutions.
Application of the Hotelling and ideal observers to detection and localization of exoplanets.
Caucci, Luca; Barrett, Harrison H; Devaney, Nicholas; Rodríguez, Jeffrey J
2007-12-01
The ideal linear discriminant or Hotelling observer is widely used for detection tasks and image-quality assessment in medical imaging, but it has had little application in other imaging fields. We apply it to detection of planets outside of our solar system with long-exposure images obtained from ground-based or space-based telescopes. The statistical limitations in this problem include Poisson noise arising mainly from the host star, electronic noise in the image detector, randomness or uncertainty in the point-spread function (PSF) of the telescope, and possibly a random background. PSF randomness is reduced but not eliminated by the use of adaptive optics. We concentrate here on the effects of Poisson and electronic noise, but we also show how to extend the calculation to include a random PSF. For the case where the PSF is known exactly, we compare the Hotelling observer to other observers commonly used for planet detection; comparison is based on receiver operating characteristic (ROC) and localization ROC (LROC) curves.
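A toy version of the Hotelling computation (all image parameters invented): estimate the class means and average covariance from sample images with Poisson plus read noise, form the template w = S⁻¹Δḡ, and score detectability from the test-statistic distributions.

```python
import numpy as np

rng = np.random.default_rng(21)

# Toy 16x16 images: stellar halo plus Poisson and read noise; signal-present
# images add a faint planet PSF (all values are illustrative).
npix = 16 * 16
x = np.arange(npix)
halo = 200.0 * np.exp(-((x % 16 - 8) ** 2 + (x // 16 - 8) ** 2) / 30.0)
planet = 8.0 * np.exp(-((x % 16 - 12) ** 2 + (x // 16 - 8) ** 2) / 2.0)

def sample(add_planet, n):
    lam = halo + (planet if add_planet else 0.0)
    return rng.poisson(lam, (n, npix)) + rng.normal(0, 3.0, (n, npix))

g0, g1 = sample(False, 5000), sample(True, 5000)

# Hotelling template: w = S^{-1} (mean1 - mean0), with S the average
# covariance; the linear test statistic is t(g) = w' g.
S = 0.5 * (np.cov(g0.T) + np.cov(g1.T))
w = np.linalg.solve(S + 1e-6 * np.eye(npix), g1.mean(0) - g0.mean(0))
t0, t1 = g0 @ w, g1 @ w
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
print("Hotelling detectability SNR:", snr)
```

Sweeping a threshold over t0 and t1 traces out the ROC curve; repeating the scoring over candidate planet positions gives the localization (LROC) variant.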
Application of the Hotelling and ideal observers to detection and localization of exoplanets
Caucci, Luca; Barrett, Harrison H.; Devaney, Nicholas; Rodríguez, Jeffrey J.
2008-01-01
The ideal linear discriminant or Hotelling observer is widely used for detection tasks and image-quality assessment in medical imaging, but it has had little application in other imaging fields. We apply it to detection of planets outside of our solar system with long-exposure images obtained from ground-based or space-based telescopes. The statistical limitations in this problem include Poisson noise arising mainly from the host star, electronic noise in the image detector, randomness or uncertainty in the point-spread function (PSF) of the telescope, and possibly a random background. PSF randomness is reduced but not eliminated by the use of adaptive optics. We concentrate here on the effects of Poisson and electronic noise, but we also show how to extend the calculation to include a random PSF. For the case where the PSF is known exactly, we compare the Hotelling observer to other observers commonly used for planet detection; comparison is based on receiver operating characteristic (ROC) and localization ROC (LROC) curves. PMID:18059905
Concurrent generation of multivariate mixed data with variables of dissimilar types.
Amatya, Anup; Demirtas, Hakan
2016-01-01
Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily count, binary/ordinal and continuous attributes. The present paper builds on previous works on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism that allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
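A small sketch of sampling from the generalized Poisson distribution named in the abstract, via inversion over a truncated support; the parameter values are illustrative, not from the paper.

```python
import math
import numpy as np

def gpois_pmf(x, theta, lam):
    """Generalized Poisson pmf; lam < 0 gives under-, lam > 0 over-dispersion."""
    t = theta + x * lam
    if t <= 0:                              # support is truncated when lam < 0
        return 0.0
    return math.exp(math.log(theta) + (x - 1) * math.log(t) - t - math.lgamma(x + 1))

def gpois_rvs(theta, lam, size, rng):
    """Inversion sampling from the (renormalised, truncated) GP pmf."""
    xs = np.arange(200)
    p = np.array([gpois_pmf(x, theta, lam) for x in xs])
    return rng.choice(xs, size=size, p=p / p.sum())

rng = np.random.default_rng(2)
x = gpois_rvs(theta=4.0, lam=-0.3, size=100_000, rng=rng)
# Under-dispersion: mean ~ theta/(1-lam) ~ 3.08, variance ~ theta/(1-lam)^3 ~ 1.82
print(x.mean(), x.var())
```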
Effect of motivational interviewing on rates of early childhood caries: a randomized trial.
Harrison, Rosamund; Benton, Tonya; Everson-Stewart, Siobhan; Weinstein, Phil
2007-01-01
The purposes of this randomized controlled trial were to: (1) test motivational interviewing (MI) to prevent early childhood caries; and (2) use Poisson regression for data analysis. A total of 240 South Asian children 6 to 18 months old were enrolled and randomly assigned to either the MI or control condition. Children had a dental exam, and their mothers completed pretested instruments at baseline and 1 and 2 years postintervention. Other covariates that might explain outcomes over and above treatment differences were modeled using Poisson regression. Hazard ratios were produced. Analyses included all participants whenever possible. Poisson regression supported a protective effect of MI (hazard ratio [HR] = 0.54; 95% CI = 0.35-0.84); that is, the MI group had about a 46% lower rate of dmfs at 2 years than did control children. Similar treatment effect estimates were obtained from models that included, as alternative outcomes, ds, dms, and dmfs, including "white spot lesions." Exploratory analyses revealed that rates of dmfs were higher in children whose mothers had: (1) prechewed their food; (2) been raised in a rural environment; and (3) a higher family income (P<.05). A motivational interviewing-style intervention shows promise to promote preventive behaviors in mothers of young children at high risk for caries.
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
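A minimal simulation of the random pulse model described above, assuming illustrative values for the arrival rate, intensity standard deviation, and the three equally likely tip locations.

```python
import numpy as np

rng = np.random.default_rng(3)
rate, sigma, T = 20.0, 1.5, 10.0            # arrival rate [1/s], intensity std, horizon (assumed)
sites = ["tip-A", "tip-B", "tip-C"]         # three equally likely impact points near the tip

n = rng.poisson(rate * T)                   # Poisson number of impacts in [0, T]
arrivals = np.sort(rng.uniform(0.0, T, n))  # given n, Poisson arrival times are uniform
amplitudes = rng.normal(0.0, sigma, n)      # zero-mean normal pulse intensities
where = rng.choice(sites, n)                # independent random location at each arrival

for t, a, s in list(zip(arrivals, amplitudes, where))[:5]:
    print(f"t = {t:6.3f} s   amplitude = {a:+.2f}   at {s}")
```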
Hosseinpour, Mehdi; Pour, Mehdi Hossein; Prasetijo, Joewono; Yahaya, Ahmad Shukri; Ghadiri, Seyed Mohammad Reza
2013-01-01
The objective of this study was to examine the effects of various roadway characteristics on the incidence of pedestrian-vehicle crashes by developing a set of crash prediction models on 543 km of Malaysia federal roads over a 4-year time span between 2007 and 2010. Four count models including the Poisson, negative binomial (NB), hurdle Poisson (HP), and hurdle negative binomial (HNB) models were developed and compared to model the number of pedestrian crashes. The results indicated the presence of overdispersion in the pedestrian crashes (PCs) and showed that it is due to excess zeros rather than variability in the crash data. To handle the issue, the hurdle Poisson model was found to be the best model among the considered models in terms of comparative measures. Moreover, the variables average daily traffic, heavy vehicle traffic, speed limit, land use, and area type were significantly associated with PCs.
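A sketch of fitting a hurdle Poisson model by maximum likelihood on simulated counts; the parameter values are invented, and a real crash model would add covariates to both model parts.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(4)
pi_true, mu_true, n = 0.6, 2.5, 5000        # zero probability and Poisson mean (invented)

def ztpois(mu, size, rng):
    out = rng.poisson(mu, size)
    while (m := out == 0).any():            # resample zeros -> zero-truncated Poisson
        out[m] = rng.poisson(mu, m.sum())
    return out

y = np.where(rng.uniform(size=n) < pi_true, 0, ztpois(mu_true, n, rng))

def nll(params):
    pi = 1.0 / (1.0 + np.exp(-params[0]))   # logistic/log links keep parameters valid
    mu = np.exp(params[1])
    k = y[y > 0]
    ll = (np.log(pi) * (y == 0).sum()
          + len(k) * np.log(1 - pi)          # hurdle: zero-truncated Poisson above zero
          + np.sum(k * np.log(mu) - mu - gammaln(k + 1) - np.log(1 - np.exp(-mu))))
    return -ll

res = minimize(nll, x0=[0.0, 0.0])
print("pi ~", 1 / (1 + np.exp(-res.x[0])), "  mu ~", np.exp(res.x[1]))
```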
NASA Astrophysics Data System (ADS)
Moreto, Jose; Liu, Xiaofeng
2017-11-01
The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. Dirichlet condition at one boundary point and Neumann condition at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with a homogeneously distributed random noise added to the entire field of DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the Matlab function rand, has a magnitude varying randomly within the range of +/-40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions achieved by using different random number seeds are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure normalized by the DNS pressure variation range is 0.15 +/-0.07 for the Poisson equation approach, 0.028 +/-0.003 for the Circular Virtual Boundary method and 0.027 +/-0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.
Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures
NASA Astrophysics Data System (ADS)
Dettmann, Carl P.
2018-05-01
Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
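A quick empirical check of the isolated-node property for the uniform-square model; the node count and linking range below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, r, trials = 500, 0.06, 500           # nodes, linking range, repetitions (assumed)
counts = np.empty(trials, dtype=int)
for t in range(trials):
    p = rng.uniform(size=(n, 2))        # uniform node distribution on the unit square
    d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    counts[t] = (d2.min(axis=1) > r * r).sum()   # nodes with no neighbour within r

lam = counts.mean()
for k in range(5):                      # isolated-node counts should be ~ Poisson(lam)
    print(k, round((counts == k).mean(), 3), round(stats.poisson.pmf(k, lam), 3))
```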
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities-like the count variability, inter-spike interval (ISI) variability and ISI correlations-and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
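A minimal generator for the Poisson process with dead-time (PPD) and its superposition; the rate, dead-time, and number of trains are illustrative, and this simple shifted-exponential construction is only a sketch of the paper's algorithms.

```python
import numpy as np

def ppd_train(rate, dead, T, rng):
    """Poisson process with dead-time: exponential gaps shifted by a refractory period."""
    hazard = rate / (1.0 - rate * dead)     # chosen so the mean firing rate equals `rate`
    t, spikes = 0.0, []
    while True:
        t += dead + rng.exponential(1.0 / hazard)
        if t > T:
            return np.array(spikes)
        spikes.append(t)

rng = np.random.default_rng(6)
trains = [ppd_train(rate=20.0, dead=0.003, T=100.0, rng=rng) for _ in range(50)]
pooled = np.sort(np.concatenate(trains))    # superposition of 50 PPD spike trains
isi = np.diff(pooled)
cv = isi.std() / isi.mean()
print("superposition ISI CV:", cv)          # close to 1, yet higher-order statistics
                                            # still deviate from a true Poisson process
```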
Approximations to camera sensor noise
NASA Astrophysics Data System (ADS)
Jin, Xiaodan; Hirakawa, Keigo
2013-02-01
Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
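A small simulation contrasting the Poisson-plus-AWGN sensor model with its SD-AWGN approximation; the signal range and read-noise level are assumed. The two models agree in variance but differ in higher moments, which is the kind of distributional mismatch the paper examines.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(7)
signal = np.linspace(5, 200, 50)            # mean photo-electron counts (assumed range)
read_sigma = 3.0                            # read-noise std in electrons (assumed)

photons = rng.poisson(signal, size=(100_000, 50)).astype(float)
raw = photons + rng.normal(0.0, read_sigma, photons.shape)   # Poisson + AWGN sensor model

# SD-AWGN approximation: Gaussian with the matching signal-dependent variance
sd_awgn = signal + rng.normal(0.0, np.sqrt(signal + read_sigma**2), photons.shape)

print("sensor-model var:", raw.var(axis=0)[:3])
print("SD-AWGN var     :", sd_awgn.var(axis=0)[:3])              # variances agree...
print("skewness:", skew(raw[:, 0]), "vs", skew(sd_awgn[:, 0]))   # ...but skewness does not
```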
Random matrices and the New York City subway system
NASA Astrophysics Data System (ADS)
Jagannath, Aukosh; Trogdon, Thomas
2017-09-01
We analyze subway arrival times in the New York City subway system. We find regimes where the gaps between trains are well modeled by (unitarily invariant) random matrix statistics and Poisson statistics. The departure from random matrix statistics is captured by the value of the Coulomb potential along the subway route. This departure becomes more pronounced as trains make more stops.
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
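A sketch of the same transformation idea: computer-generated uniform random numbers mapped through inverse CDFs to the non-Gaussian load distributions listed above (the distribution parameters here are illustrative).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
u = rng.uniform(size=100_000)               # computer-generated uniform random numbers

loads = {                                   # inverse-CDF transforms; parameters invented
    "exponential": stats.expon(scale=1.0).ppf(u),
    "Weibull":     stats.weibull_min(c=1.5).ppf(u),
    "log-normal":  stats.lognorm(s=0.5).ppf(u),
    "Poisson":     stats.poisson(mu=4.0).ppf(u),
}
for name, x in loads.items():
    print(f"{name:12s} mean = {x.mean():5.2f}   99.9% peak = {np.quantile(x, 0.999):6.2f}")
```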
Eruption patterns of the Chilean volcanoes Villarrica, Llaima, and Tupungatito
NASA Astrophysics Data System (ADS)
Muñoz, Miguel
1983-09-01
The historical eruption records of three Chilean volcanoes have been subjected to many statistical tests, and none have been found to differ significantly from random, or Poissonian, behaviour. The statistical analysis shows rough conformity with the descriptions determined from the eruption rate functions. It is possible that a constant eruption rate describes the activity of Villarrica; Llaima and Tupungatito present complex eruption rate patterns that appear, however, to have no statistical significance. Questions related to loading and extinction processes and to the existence of shallow secondary magma chambers to which magma is supplied from a deeper system are also addressed. The analysis and the computation of the serial correlation coefficients indicate that the three series may be regarded as stationary renewal processes. None of the test statistics indicates rejection of the Poisson hypothesis at a level less than 5%, but the coefficient of variation for the eruption series at Llaima is significantly different from the value expected for a Poisson process. Also, the estimates of the normalized spectrum of the counting process for the three series suggest a departure from the random model, but the deviations are not found to be significant at the 5% level. Kolmogorov-Smirnov and chi-squared test statistics, applied directly to ascertain the probability P with which the random Poisson model fits the data, indicate that there is significant agreement in the case of Villarrica (P=0.59) and Tupungatito (P=0.3). Even though the P-value for Llaima is a marginally significant 0.1 (which is equivalent to rejecting the Poisson model at the 90% confidence level), the series suggests that nonrandom features are possibly present in the eruptive activity of this volcano.
Applying the Anderson-Darling test to suicide clusters: evidence of contagion at U.S. universities?
MacKenzie, Donald W
2013-01-01
Suicide clusters at Cornell University and the Massachusetts Institute of Technology (MIT) prompted popular and expert speculation of suicide contagion. However, some clustering is to be expected in any random process. This work tested whether suicide clusters at these two universities differed significantly from those expected under a homogeneous Poisson process, in which suicides occur randomly and independently of one another. Suicide dates were collected for MIT and Cornell for 1990-2012. The Anderson-Darling statistic was used to test the goodness-of-fit of the intervals between suicides to the distribution expected under the Poisson process. Suicides at MIT were consistent with the homogeneous Poisson process, while those at Cornell showed clustering inconsistent with such a process (p = .05). The Anderson-Darling test provides a statistically powerful means to identify suicide clustering in small samples. Practitioners can use this method to test for clustering in relevant communities. The difference in clustering behavior between the two institutions suggests that more institutions should be studied to determine the prevalence of suicide clustering in universities and its causes.
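A sketch of the test described above, applied to stand-in event dates; scipy's Anderson-Darling test with an exponential null plays the role of the paper's interval test, and the dates below are random placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Stand-in event dates: under a homogeneous Poisson process the inter-event
# intervals are exponential, which is exactly what the test below checks.
events = np.sort(rng.uniform(0, 23 * 365, 30))      # 30 events over 23 years, in days
gaps = np.diff(events)

res = stats.anderson(gaps, dist="expon")            # H0: intervals are exponential
print("A^2 =", res.statistic)
for crit, sig in zip(res.critical_values, res.significance_level):
    print(f"reject at the {sig:4.1f}% level: {res.statistic > crit}")
```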
Doubly stochastic Poisson process models for precipitation at fine time-scales
NASA Astrophysics Data System (ADS)
Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao
2012-09-01
This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
NASA Astrophysics Data System (ADS)
Privault, Nicolas
2016-05-01
We construct differential forms of all orders and a covariant derivative together with its adjoint on the probability space of a standard Poisson process, using derivation operators. In this framework we derive a de Rham-Hodge-Kodaira decomposition as well as Weitzenböck and Clark-Ocone formulas for random differential forms. As in the Wiener space setting, this construction provides two distinct approaches to the vanishing of harmonic differential forms.
Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan
2011-11-01
To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to an increase in injury frequency. A total of 2917 primary and secondary school students were selected from Hefei by the cluster random sampling method and surveyed by questionnaire. The data on count event-based injuries were used to fit modified Poisson regression and negative binomial regression models. The risk factors associated with increased unintentional injury frequency among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model exhibited over-dispersion (P < 0.0001) based on the Lagrange multiplier test. Therefore, the over-dispersed data were better fitted by the modified Poisson regression and negative binomial regression models, respectively. Both showed that male gender, younger age, a father working outside of the hometown, a guardian education level above junior high school and smoking might be associated with higher injury frequencies. Given the tendency of injury frequency data to cluster, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better and this model could give a more accurate interpretation of relevant factors affecting the frequency of injury.
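A sketch of the two regression approaches on simulated over-dispersed counts, using statsmodels; the covariates, effect sizes, and dispersion value are invented, and the robust-variance Poisson fit stands in for "modified Poisson regression".

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 2000
male = rng.integers(0, 2, n)
age = rng.integers(7, 18, n)
X = sm.add_constant(np.column_stack([male, age]).astype(float))

mu = np.exp(-0.5 + 0.4 * male - 0.05 * age)        # hypothetical effects on injury counts
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # over-dispersed counts with mean mu

nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
# "Modified Poisson": Poisson point estimates with robust (sandwich) standard errors
mp = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("NB coefficients      :", nb.params)
print("mod. Poisson coeffs  :", mp.params)
print("NB vs robust std err :", nb.bse, mp.bse)
```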
Barrett, Harrison H; Myers, Kyle J; Caucci, Luca
2014-08-17
A fundamental way of describing a photon-limited imaging system is in terms of a Poisson random process in spatial, angular and wavelength variables. The mean of this random process is the spectral radiance. The principle of conservation of radiance then allows a full characterization of the noise in the image (conditional on viewing a specified object). To elucidate these connections, we first review the definitions and basic properties of radiance as defined in terms of geometrical optics, radiology, physical optics and quantum optics. The propagation and conservation laws for radiance in each of these domains are reviewed. Then we distinguish four categories of imaging detectors that all respond in some way to the incident radiance, including the new category of photon-processing detectors. The relation between the radiance and the statistical properties of the detector output is discussed and related to task-based measures of image quality and the information content of a single detected photon.
Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bo Lijun; Wang Yongjin; Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn
2013-08-01
We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.
Dynamics of a prey-predator system under Poisson white noise excitation
NASA Astrophysics Data System (ADS)
Pan, Shan-Shan; Zhu, Wei-Qiu
2014-10-01
The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment is modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε2s on ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.
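A minimal Euler-type Monte Carlo sketch of an LV system perturbed by Poisson white noise; all parameters are assumed, and the paper's stochastic averaging and perturbation analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
a, b, c, d = 1.0, 0.1, 1.5, 0.075     # classical LV parameters (assumed)
jump_rate, jump_std = 5.0, 0.02       # Poisson white noise: arrival rate, mark std (assumed)
dt, nsteps = 1e-3, 50_000
x, y = 10.0, 10.0                     # initial prey/predator densities
for _ in range(nsteps):
    njump = rng.poisson(jump_rate * dt)                 # jumps arriving in this step
    dC = rng.normal(0.0, jump_std, njump).sum()         # compound-Poisson increment
    x += x * (a - b * y) * dt + x * dC                  # noise enters the prey growth rate
    y += y * (-c + d * x) * dt
print("final state (prey, predator):", (round(x, 2), round(y, 2)))
```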
Filling of a Poisson trap by a population of random intermittent searchers.
Bressloff, Paul C; Newby, Jay M
2012-03-01
We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver its cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N→∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫_0^t λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite N. We thus determine how the mean first passage time (MFPT) for filling the target depends on N and n.
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
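A compact demonstration of the Markovian-arrivals ideas in the lecture notes: exponential waiting times produce Poisson-distributed counts (rate and horizon below are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(12)
rate, T = 3.0, 10_000.0
gaps = rng.exponential(1.0 / rate, int(rate * T * 1.2))   # memoryless waiting times
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals < T]

counts = np.histogram(arrivals, bins=int(T), range=(0, T))[0]
print("count mean, var:", counts.mean(), counts.var())    # both ~ rate: Poisson counts
print("mean waiting time:", np.diff(arrivals).mean(), "~ 1/rate =", 1 / rate)
```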
Fractional Brownian motion and long term clinical trial recruitment
Zhang, Qiang; Lai, Dejian
2015-01-01
Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM). However, when the independent-increment structure of the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model with illustrative examples from different trials and simulations. PMID:26347306
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
Atomic clocks and the continuous-time random-walk
NASA Astrophysics Data System (ADS)
Formichella, Valerio; Camparo, James; Tavella, Patrizia
2017-11-01
Atomic clocks play a fundamental role in many fields, most notably they generate Universal Coordinated Time and are at the heart of all global navigation satellite systems. Notwithstanding their excellent timekeeping performance, their output frequency does vary: it can display deterministic frequency drift; diverse continuous noise processes result in nonstationary clock noise (e.g., random-walk frequency noise, modelled as a Wiener process), and the clock frequency may display sudden changes (i.e., "jumps"). Typically, the clock's frequency instability is evaluated by the Allan or Hadamard variances, whose functional forms can identify the different operative noise processes. Here, we show that the Allan and Hadamard variances of a particular continuous-time random-walk, the compound Poisson process, have the same functional form as for a Wiener process with drift. The compound Poisson process, introduced as a model for observed frequency jumps, is an alternative to the Wiener process for modelling random walk frequency noise. This alternate model fits well the behavior of the rubidium clocks flying on GPS Block-IIR satellites. Further, starting from jump statistics, the model can be improved by considering a more general form of continuous-time random-walk, and this could bring new insights into the physics of atomic clocks.
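A numerical sketch of the paper's central observation: a compound Poisson frequency process and a Wiener process with the same variance rate produce Allan variances of the same functional form. All noise levels below are illustrative, not clock data.

```python
import numpy as np

def avar(y, m):
    """Non-overlapping Allan variance of the frequency series y at averaging factor m."""
    nb = len(y) // m
    yb = y[: nb * m].reshape(nb, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(yb) ** 2)

rng = np.random.default_rng(13)
n = 1_000_000
wiener = np.cumsum(rng.normal(0.0, 1e-3, n))      # random-walk FM, variance rate 1e-6/step

njumps = rng.poisson(0.01, n)                     # jump counts per step, rate 0.01 (assumed)
marks = rng.normal(0.0, 1.0, n) * 1e-2 * np.sqrt(njumps)  # summed jump sizes per step
compound = np.cumsum(marks)                       # compound Poisson, same variance rate

for m in (10, 100, 1000):
    print(m, avar(wiener, m), avar(compound, m))  # both grow with m in the same way
```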
Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.
Anisimov, Vladimir V
2011-01-01
This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
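A minimal Monte Carlo sketch of the Poisson-gamma recruitment model described above; the centre count, horizon, and gamma parameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(14)
centres, months, runs = 50, 12.0, 10_000
shape, scale = 2.0, 0.6                  # gamma population of centre rates (assumed values)

rates = rng.gamma(shape, scale, size=(runs, centres))    # patients/month at each centre
total = rng.poisson(rates * months).sum(axis=1)          # recruited patients per trial run

print("mean recruited:", total.mean())
print("95% prediction interval:", np.percentile(total, [2.5, 97.5]))
```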
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
NASA Astrophysics Data System (ADS)
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the use of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with rates similar to PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations. This makes significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
Aguero-Valverde, Jonathan
2013-01-01
In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates since the precision of these estimates is vital for ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also by fixing the random effects over time the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among them. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments on the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
Hurdle models for multilevel zero-inflated data via h-likelihood.
Molas, Marek; Lesaffre, Emmanuel
2010-12-30
Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
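A sketch of simulating a two-state MMPP, the simplest member of the model class discussed above; the state rates and switching rates are illustrative, and the covariate modulation studied in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(15)
lam = np.array([0.2, 5.0])     # bucket-tip rates in the two hidden states (assumed)
q = np.array([0.05, 0.5])      # rates of leaving state 0 (dry) and state 1 (wet) (assumed)

t, state, T, tips = 0.0, 0, 1000.0, []
while t < T:
    stay = rng.exponential(1.0 / q[state])       # sojourn time in the current state
    n = rng.poisson(lam[state] * stay)           # Poisson arrivals while in that state
    tips.extend(np.sort(t + rng.uniform(0.0, stay, n)))
    t += stay
    state = 1 - state                            # alternate between the two states

tips = np.array(tips)
tips = tips[tips < T]                            # discard arrivals past the horizon
print(len(tips), "bucket tips; overall rate:", round(len(tips) / T, 3))
```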
Predictions of Poisson's ratio in cross-ply laminates containing matrix cracks and delaminations
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Allen, David H.; Nottorf, Eric W.
1989-01-01
A damage-dependent constitutive model for laminated composites has been developed for the combined damage modes of matrix cracks and delaminations. The model is based on the concept of continuum damage mechanics and uses second-order tensor valued internal state variables to represent each mode of damage. The internal state variables are defined as the local volume average of the relative crack face displacements. Since the local volume for delaminations is specified at the laminate level, the constitutive model takes the form of laminate analysis equations modified by the internal state variables. Model implementation is demonstrated for the laminate engineering modulus E(x) and Poisson's ratio nu(xy) of quasi-isotropic and cross-ply laminates. The model predictions are in close agreement to experimental results obtained for graphite/epoxy laminates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise. Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
Bayesian analysis of volcanic eruptions
NASA Astrophysics Data System (ADS)
Ho, Chih-Hsiang
1990-10-01
The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
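A quick simulation confirming the gamma-mixed Poisson/negative binomial identity used in the paper (the gamma parameters are arbitrary): mixing Poisson counts over a gamma-distributed rate reproduces the NBD cell probabilities.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(16)
shape, scale = 3.0, 0.8                      # gamma prior on the eruptive rate (arbitrary)
lam = rng.gamma(shape, scale, 100_000)
counts = rng.poisson(lam)                    # eruptions in unit time-periods

p = 1.0 / (1.0 + scale)                      # gamma-mixed Poisson = NB(r=shape, p)
for k in range(5):
    print(k, round((counts == k).mean(), 4), round(stats.nbinom.pmf(k, shape, p), 4))
```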
Spatial variation of natural radiation and childhood leukaemia incidence in Great Britain.
Richardson, S; Monfort, C; Green, M; Draper, G; Muirhead, C
This paper describes an analysis of the geographical variation of childhood leukaemia incidence in Great Britain over a 15 year period in relation to natural radiation (gamma and radon). Data at the level of the 459 district level local authorities in England, Wales and regional districts in Scotland are analysed in two complementary ways: first, by Poisson regressions with the inclusion of environmental covariates and a smooth spatial structure; secondly, by a hierarchical Bayesian model in which extra-Poisson variability is modelled explicitly in terms of spatial and non-spatial components. From this analysis, we deduce a strong indication that a main part of the variability is accounted for by a local neighbourhood 'clustering' structure. This structure is furthermore relatively stable over the 15 year period for the lymphocytic leukaemias which make up the majority of observed cases. We found no evidence of a positive association of childhood leukaemia incidence with outdoor or indoor gamma radiation levels. There is no consistent evidence of any association with radon levels. Indeed, in the Poisson regressions, a significant positive association was only observed for one 5-year period, a result which is not compatible with a stable environmental effect. Moreover, this positive association became clearly non-significant when over-dispersion relative to the Poisson distribution was taken into account.
A new multivariate zero-adjusted Poisson model with applications to biomedicine.
Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen
2018-05-25
Recently, although advances were made on modeling multivariate count data, existing models have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure for random components that are all positive or negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows correlations between components with a more flexible dependency structure, that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties, and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S
2016-12-01
Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components-one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.
Radio pulsar glitches as a state-dependent Poisson process
NASA Astrophysics Data System (ADS)
Fulgenzi, W.; Melatos, A.; Hughes, B. D.
2017-10-01
Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲10^2 vortices and ≲10^2 avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/α_c(β), where α_c(β) ≈ β^{-1/2} is a transition value. In the regime α ≳ α_c(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ α_c(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar.
A Bayesian ridge regression analysis of congestion's impact on urban expressway safety.
Shi, Qi; Abdel-Aty, Mohamed; Lee, Jaeyoung
2016-03-01
With the rapid growth of traffic in urban areas, concerns about congestion and traffic safety have been heightened. This study leveraged both Automatic Vehicle Identification (AVI) system and Microwave Vehicle Detection System (MVDS) installed on an expressway in Central Florida to explore how congestion impacts the crash occurrence in urban areas. Multiple congestion measures from the two systems were developed. To ensure more precise estimates of the congestion's effects, the traffic data were aggregated into peak and non-peak hours. Multicollinearity among traffic parameters was examined. The results showed the presence of multicollinearity especially during peak hours. As a response, ridge regression was introduced to cope with this issue. Poisson models with uncorrelated random effects, correlated random effects, and both correlated random effects and random parameters were constructed within the Bayesian framework. It was proven that correlated random effects could significantly enhance model performance. The random parameters model has similar goodness-of-fit compared with the model with only correlated random effects. However, by accounting for the unobserved heterogeneity, more variables were found to be significantly related to crash frequency. The models indicated that congestion increased crash frequency during peak hours while during non-peak hours it was not a major crash contributing factor. Using the random parameter model, the three congestion measures were compared. It was found that all congestion indicators had similar effects while Congestion Index (CI) derived from MVDS data was a better congestion indicator for safety analysis. Also, analyses showed that the segments with higher congestion intensity could not only increase property damage only (PDO) crashes, but also more severe crashes. In addition, the issues regarding the necessity to incorporate a specific congestion indicator for congestion's effects on safety and to take care of the multicollinearity between explanatory variables were also discussed. By including a specific congestion indicator, the model performance significantly improved. When comparing models with and without ridge regression, the magnitude of the coefficients was altered in the existence of multicollinearity. These conclusions suggest that the use of an appropriate congestion measure and consideration of multicollinearity among the variables would improve the models and our understanding about the effects of congestion on traffic safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
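A sketch of ridge-penalised Poisson regression on simulated collinear congestion measures, using scikit-learn's PoissonRegressor; the data-generating values are invented. The penalised fit shrinks and stabilises coefficients that multicollinearity would otherwise inflate, which mirrors the motivation for ridge regression in the abstract.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(17)
n = 5000
speed_var = rng.normal(0.0, 1.0, n)
congestion = 0.9 * speed_var + 0.45 * rng.normal(0.0, 1.0, n)  # nearly collinear pair
X = np.column_stack([speed_var, congestion])
y = rng.poisson(np.exp(0.2 + 0.3 * congestion))                # hypothetical crash counts

for alpha in (0.0, 1.0):                    # alpha > 0 adds the L2 (ridge) penalty
    fit = PoissonRegressor(alpha=alpha, max_iter=300).fit(X, y)
    print(f"alpha = {alpha}: coefficients = {fit.coef_}")
```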
The BRST complex of homological Poisson reduction
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin
2017-02-01
BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.
Elastic properties of overpressured and unconsolidated sediments
Lee, Myung W.
2003-01-01
Differential pressure affects elastic velocities and Poisson's ratio of sediments in such a way that velocities increase as differential pressure increases. Overpressured zones in sediments can be detected by observing an increase in Poisson's ratio with a corresponding drop in elastic velocities. In highly overpressured sands, such as shallow water flow sands, the P- to S-wave velocity ratio (Vp/Vs) is very high, on the order of 10 or higher, due to the unconsolidated and uncemented nature of sediments. In order to predict elastic characteristics of highly overpressured sands, Biot-Gassmann theory by Lee (BGTL) is used with a variable exponent n that depends on differential pressure and the degree of consolidation/compaction. The exponent n decreases as differential pressure and the degree of consolidation increase, and, as n decreases, velocity increases and Vp/Vs decreases. The predicted velocity ratio by BGTL agrees well with the measured velocity ratio at low differential pressure for unconsolidated sediments.
Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variables with fixed upper and lower bounds.
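A condensed sketch of the three-step logistic quantile regression procedure on simulated bounded counts; the predictor, effect sizes, and number of jitter replicates are illustrative, not the Spotted Owl data or model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(18)
n, lower, upper = 500, 0.0, 3.0                 # fledgling counts bounded on [0, 3]
precip = rng.normal(0.0, 1.0, n)
latent = 1.5 - 0.6 * precip + rng.normal(0.0, 0.7, n)
y = np.clip(np.round(latent), lower, upper)     # simulated bounded counts

X = sm.add_constant(precip)
reps, q, eps = 50, 0.5, 1e-6
betas = []
for _ in range(reps):
    z = y + rng.uniform(0.0, 1.0, n)                        # 1) jitter to a continuous scale
    z = np.clip(z, lower + eps, upper + 1 - eps)
    logit = np.log((z - lower) / (upper + 1 - z))           # 2) logit-transform to the real line
    betas.append(sm.QuantReg(logit, X).fit(q=q).params)     # 3) linear quantile regression
beta = np.mean(betas, axis=0)                               # average over jitter replicates

xb = X @ beta                                               # back-transform: quantiles are
pred = (lower + (upper + 1) * np.exp(xb)) / (1 + np.exp(xb))  # equivariant to monotone maps
print("median prediction range:", round(float(pred.min()), 2), "-", round(float(pred.max()), 2))
```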
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
NASA Astrophysics Data System (ADS)
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
Fractional Poisson--a simple dose-response model for human norovirus.
Messner, Michael J; Berger, Philip; Nappier, Sharon P
2014-10-01
This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures. © 2014 Society for Risk Analysis Published 2014. This article is a U.S. Government work and is in the public domain for the U.S.A.
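A sketch of the fractional Poisson dose-response model fitted to hypothetical challenge data; the dose-response pairs below are invented for illustration and are not the NoV study data.

```python
import numpy as np
from scipy.optimize import curve_fit

def fractional_poisson(dose, f, a):
    """P(infection) = (fraction susceptible) x P(at least one aggregate ingested)."""
    return f * (1.0 - np.exp(-dose / a))    # mean number of aggregates = dose / a

# Hypothetical (dose, fraction infected) pairs for illustration only -- not the NoV data
dose = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
frac = np.array([0.05, 0.33, 0.50, 0.52, 0.52])

popt, _ = curve_fit(fractional_poisson, dose, frac, p0=[0.5, 1e3], bounds=(0, [1, 1e7]))
print("susceptible fraction f ~ %.2f, mean aggregate size a ~ %.0f" % tuple(popt))
```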
2013-01-01
Background Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. the outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. Methods The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results The GPR model (as compared to standard Poisson regression and negative binomial regression) is found to be justified for studying the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions The consistency of our findings with many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
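For readers who want to reproduce the modelling choice, statsmodels ships a generalized Poisson estimator. The sketch below runs it on synthetic data (not the BDHS data); under the fitted parameterization, a negative dispersion parameter alpha indicates under-dispersion.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.discrete_model import GeneralizedPoisson

    rng = np.random.default_rng(42)
    n = 1000
    X = sm.add_constant(rng.normal(size=(n, 2)))   # stand-ins for household covariates
    y = rng.binomial(3, 0.3, size=n)               # bounded count, under-dispersed (var < mean)

    gp = GeneralizedPoisson(y, X).fit(disp=False)
    print(gp.summary())                            # look for alpha < 0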
Preference heterogeneity in a count data model of demand for off-highway vehicle recreation
Thomas P Holmes; Jeffrey E Englin
2010-01-01
This paper examines heterogeneity in the preferences for OHV recreation by applying the random parameters Poisson model to a data set of off-highway vehicle (OHV) users at four National Forest sites in North Carolina. The analysis develops estimates of individual consumer surplus and finds that estimates are systematically affected by the random parameter specification...
Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.
Maldonado, G; Greenland, S
1998-07-01
A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix Poisson Simulation is compared with Markov Simulation showing a number of advantages. Especially aggregation into state variables and aggregation of many events per time-step makes Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
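The core of Poisson Simulation, as described here, is to replace each deterministic flow F·Δt in a compartment model by a Poisson-distributed number of transfers Po(F·Δt) per time step, so that many events are aggregated into a single draw. A minimal sketch of a birth-death macro-model under that reading (parameter names are ours):

    import numpy as np

    def birth_death_macro(birth_rate, death_rate, x0, dt, n_steps, seed=0):
        rng = np.random.default_rng(seed)
        x = np.empty(n_steps + 1, dtype=np.int64)
        x[0] = x0
        for t in range(n_steps):
            births = rng.poisson(birth_rate * x[t] * dt)   # all birth events this step, aggregated
            deaths = rng.poisson(death_rate * x[t] * dt)   # all death events this step, aggregated
            x[t + 1] = max(x[t] + births - deaths, 0)
        return x

    path = birth_death_macro(birth_rate=0.11, death_rate=0.10, x0=500, dt=0.1, n_steps=1000)

Because the cost per step does not depend on the number of events, this aggregation is where the claimed orders-of-magnitude speed advantage over event-by-event Markov simulation comes from.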
Variability of multilevel switching in scaled hybrid RS/CMOS nanoelectronic circuits: theory
NASA Astrophysics Data System (ADS)
Heittmann, Arne; Noll, Tobias G.
2013-07-01
A theory is presented which describes the variability of multilevel switching in scaled hybrid resistive-switching/CMOS nanoelectronic circuits. Variability is quantified in terms of conductance variation using the first two moments derived from the probability density function (PDF) of the RS conductance. For RS based on the electrochemical metallization effect (ECM), this variability is, to some extent, caused by discrete events such as electrochemical reactions, which occur on the atomic scale and are random. The theory shows that the conductance variation depends on the joint interaction between the programming circuit and the resistive switch (RS), and explicitly quantifies the impact of RS device parameters and parameters of the programming circuit on the conductance variance. Using a current mirror as an exemplary programming circuit, an upper limit of 2-4 bits (dependent on the filament surface area) is estimated as the storage capacity exploiting the multilevel capabilities of an ECM cell. The theoretical results were verified by Monte Carlo circuit simulations in a standard circuit simulation environment using an ECM device model which models the filament growth by a Poisson process. Contribution to the Topical Issue “International Semiconductor Conference Dresden-Grenoble - ISCDG 2012”, Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.
Troyer, T W; Miller, K D
1997-07-01
To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory 1/√N and random walk pictures that have previously been proposed. When ISIs are dominated by postspike recovery, 1/√N arguments hold and spiking is regular; after the "memory" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
Dual Roles for Spike Signaling in Cortical Neural Populations
Ballard, Dana H.; Jehee, Janneke F. M.
2011-01-01
A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate repeatedly have been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ frequency range multi-cell action potential correlations, together with spike timing dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798
The Supermarket Model with Bounded Queue Lengths in Equilibrium
NASA Astrophysics Data System (ADS)
Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.
2018-04-01
In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^{−α} and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^{−α+(k−1)β}, and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
Ramis, Rebeca; Vidal, Enrique; García-Pérez, Javier; Lope, Virginia; Aragonés, Nuria; Pérez-Gómez, Beatriz; Pollán, Marina; López-Abente, Gonzalo
2009-01-01
Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson Regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (>2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of industrial pollution on the health of the population. PMID:19159450
NASA Astrophysics Data System (ADS)
Octavianty, Toharudin, Toni; Jaya, I. G. N. Mindra
2017-03-01
Tuberculosis (TB) is a disease caused by a bacterium called Mycobacterium tuberculosis, which typically attacks the lungs but can also affect the kidney, spine, and brain (Centers for Disease Control and Prevention). Indonesia had the largest number of TB cases after India (Global Tuberculosis Report 2015 by WHO). The distribution of Mycobacterium tuberculosis genotypes in Indonesia showed high genetic diversity and tended to vary by geographic region. In Bandung city, for instance, the prevalence rate of TB morbidity is quite high. The number of TB patients is count data. Geographically Weighted Poisson Regression Semiparametric (GWPRS) is a statistical tool for determining the factors that significantly influence the number of tuberculosis patients at each observation location. GWPRS is an extension of Poisson regression and GWPR that accounts for geographical factors and allows for variables that act both globally and locally. Using TB data for Bandung city (in 2015), the results identify the global and local variables that influence the number of tuberculosis patients in every sub-district.
The Poisson model limits in NBA basketball: Complexity in team sports
NASA Astrophysics Data System (ADS)
Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa
2016-12-01
Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process that can be described using the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a main role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. Most numbers of baskets (n) over a time interval (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution includes most baskets in any game, in most game situations, but in close games in the last minute, the numbers of events are distributed following a power law. The number of events can be adjusted by a mixture of two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose to use the Poisson model as a reference. The complex dynamics will emerge from the limits of this model.
A Spatial Poisson Hurdle Model for Exploring Geographic Variation in Emergency Department Visits
Neelon, Brian; Ghosh, Pulak; Loebs, Patrick F.
2012-01-01
Summary We develop a spatial Poisson hurdle model to explore geographic variation in emergency department (ED) visits while accounting for zero inflation. The model consists of two components: a Bernoulli component that models the probability of any ED use (i.e., at least one ED visit per year), and a truncated Poisson component that models the number of ED visits given use. Together, these components address both the abundance of zeros and the right-skewed nature of the nonzero counts. The model has a hierarchical structure that incorporates patient- and area-level covariates, as well as spatially correlated random effects for each areal unit. Because regions with high rates of ED use are likely to have high expected counts among users, we model the spatial random effects via a bivariate conditionally autoregressive (CAR) prior, which introduces dependence between the components and provides spatial smoothing and sharing of information across neighboring regions. Using a simulation study, we show that modeling the between-component correlation reduces bias in parameter estimates. We adopt a Bayesian estimation approach, and the model can be fit using standard Bayesian software. We apply the model to a study of patient and neighborhood factors influencing emergency department use in Durham County, North Carolina. PMID:23543242
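Stripped of the spatial CAR random effects and covariates, the two-component likelihood at the heart of the hurdle model is compact. Below is a minimal sketch of the log-likelihood, with pi (the Bernoulli use probability) and lam (the truncated-Poisson rate) as scalar placeholders for the linear predictors of the full model:

    import numpy as np
    from scipy.special import gammaln

    def hurdle_loglik(y, pi, lam):
        y = np.asarray(y, dtype=float)
        ll_zero = np.log(1.0 - pi)                           # Bernoulli: no ED use
        ll_pos = (np.log(pi)                                 # Bernoulli: some use, times
                  + y * np.log(lam) - lam - gammaln(y + 1)   # a Poisson term ...
                  - np.log(1.0 - np.exp(-lam)))              # ... truncated at zero
        return np.where(y == 0, ll_zero, ll_pos).sum()

    print(hurdle_loglik(y=[0, 0, 1, 3, 5], pi=0.4, lam=2.1))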
Statistical properties of several models of fractional random point processes
NASA Astrophysics Data System (ADS)
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.
Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi
2016-10-01
Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, and correction of dead time, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite for developing efficient reconstruction and processing methods and for reducing noise. The deviation from Poisson statistics in PET data can be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying the over-dispersion (ν < 1) or under-dispersion (ν > 1) of the data. A simple and efficient method for λ and ν parameter estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters can detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low-count emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
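A direct way to see the role of ν is to evaluate the CMP probability mass function, whose normalizing constant has no closed form and must be truncated numerically. The following is a minimal sketch under the parameterization given in the abstract (the truncation point k_max is our choice):

    import numpy as np
    from scipy.special import gammaln

    def cmp_logpmf(k, lam, nu, k_max=500):
        ks = np.arange(k_max + 1)
        log_w = ks * np.log(lam) - nu * gammaln(ks + 1)   # log of lam^k / (k!)^nu
        log_Z = np.logaddexp.reduce(log_w)                # truncated normalizing constant
        return k * np.log(lam) - nu * gammaln(k + 1) - log_Z

    # nu = 1 recovers the Poisson; nu < 1 is over-dispersed, nu > 1 under-dispersed
    print(np.exp(cmp_logpmf(np.arange(5), lam=3.0, nu=0.8)))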
Predicting rates of inbreeding in populations undergoing selection.
Woolliams, J A; Bijma, P
2000-01-01
Tractable forms of predicting rates of inbreeding (ΔF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. ΔF was shown to be approximately (1/4)(1 − ω) times the expected sum of squared lifetime contributions, where ω is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express ΔF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables then the expected long-term contribution could be substituted for the observed, provided the factor 1/4 (since ω = 0) was increased to 1/2. Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting ΔF with sib indices in discrete generations, since previously published solutions had proved complex. PMID:10747074
The non-equilibrium allele frequency spectrum in a Poisson random field framework.
Kaj, Ingemar; Mugal, Carina F
2016-10-01
In population genetic studies, the allele frequency spectrum (AFS) efficiently summarizes genome-wide polymorphism data and shapes a variety of allele frequency-based summary statistics. While existing theory typically features equilibrium conditions, emerging methodology requires an analytical understanding of the build-up of the allele frequencies over time. In this work, we use the framework of Poisson random fields to derive new representations of the non-equilibrium AFS for the case of a Wright-Fisher population model with selection. In our approach, the AFS is a scaling-limit of the expectation of a Poisson stochastic integral and the representation of the non-equilibrium AFS arises in terms of a fixation time probability distribution. The known duality between the Wright-Fisher diffusion process and a birth and death process generalizing Kingman's coalescent yields an additional representation. The results carry over to the setting of a random sample drawn from the population and provide the non-equilibrium behavior of sample statistics. Our findings are consistent with and extend a previous approach where the non-equilibrium AFS solves a partial differential forward equation with a non-traditional boundary condition. Moreover, we provide a bridge to previous coalescent-based work, and hence tie several frameworks together. Since frequency-based summary statistics are widely used in population genetics, for example, to identify candidate loci of adaptive evolution, to infer the demographic history of a population, or to improve our understanding of the underlying mechanics of speciation events, the presented results are potentially useful for a broad range of topics. Copyright © 2016 Elsevier Inc. All rights reserved.
MODEL FOR INSTANTANEOUS RESIDENTIAL WATER DEMANDS
Residential water use is visualized as a customer-server interaction often encountered in queueing theory. Individual customers are assumed to arrive according to a nonhomogeneous Poisson process, then engage water servers for random lengths of time. Busy servers are assumed t...
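A standard way to generate the nonhomogeneous Poisson arrivals this model posits is Lewis-Shedler thinning. The sketch below is generic, with a diurnal demand-rate function of our own invention and rate_max an upper bound on that rate:

    import numpy as np

    def nhpp_arrivals(rate_fn, rate_max, t_end, seed=0):
        # thinning: simulate candidates at constant rate_max, accept w.p. rate(t)/rate_max
        rng = np.random.default_rng(seed)
        t, arrivals = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)
            if t > t_end:
                return np.array(arrivals)
            if rng.random() < rate_fn(t) / rate_max:
                arrivals.append(t)

    diurnal = lambda t: 5.0 + 4.0 * np.sin(2 * np.pi * t / 24.0)   # arrivals/hour (hypothetical)
    times = nhpp_arrivals(diurnal, rate_max=9.0, t_end=48.0)

Each accepted arrival would then seize a water "server" for a random service duration, completing the queueing picture the abstract sketches.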
Wu, Yingpeng; Yi, Ningbo; Huang, Lu; Zhang, Tengfei; Fang, Shaoli; Chang, Huicong; Li, Na; Oh, Jiyoung; Lee, Jae Ah; Kozlov, Mikhail; Chipara, Alin C; Terrones, Humberto; Xiao, Peishuang; Long, Guankui; Huang, Yi; Zhang, Fan; Zhang, Long; Lepró, Xavier; Haines, Carter; Lima, Márcio Dias; Lopez, Nestor Perea; Rajukumar, Lakshmy P; Elias, Ana L; Feng, Simin; Kim, Seon Jeong; Narayanan, N T; Ajayan, Pulickel M; Terrones, Mauricio; Aliev, Ali; Chu, Pengfei; Zhang, Zhong; Baughman, Ray H; Chen, Yongsheng
2015-01-20
It is a challenge to fabricate graphene bulk materials with properties arising from the nature of individual graphene sheets, and which assemble into monolithic three-dimensional structures. Here we report the scalable self-assembly of randomly oriented graphene sheets into additive-free, essentially homogeneous graphene sponge materials that provide a combination of both cork-like and rubber-like properties. These graphene sponges, with densities similar to air, display Poisson's ratios in all directions that are near-zero and largely strain-independent during reversible compression to giant strains. At the same time, they function as enthalpic rubbers, which can recover up to 98% compression in air and 90% in liquids, and operate between -196 and 900 °C. Furthermore, these sponges provide reversible liquid absorption for hundreds of cycles and then discharge it within seconds, while still providing an effective near-zero Poisson's ratio.
Luck, Tobias; Motzek, Tom; Luppa, Melanie; Matschinger, Herbert; Fleischer, Steffen; Sesselmann, Yves; Roling, Gudrun; Beutner, Katrin; König, Hans-Helmut; Behrens, Johann; Riedel-Heller, Steffi G
2013-01-01
Background Falls in older people are a major public health issue, but the underlying causes are complex. We sought to evaluate the effectiveness of preventive home visits as a multifactorial, individualized strategy to reduce falls in community-dwelling older people. Methods Data were derived from a prospective randomized controlled trial with follow-up examination after 18 months. Two hundred and thirty participants (≥80 years of age) with functional impairment were randomized to intervention and control groups. The intervention group received up to three preventive home visits including risk assessment, home counseling intervention, and a booster session. The control group received no preventive home visits. Structured interviews at baseline and follow-up provided information concerning falls in both study groups. Random-effects Poisson regression evaluated the effect of preventive home visits on the number of falls controlling for covariates. Results Random-effects Poisson regression showed a significant increase in the number of falls between baseline and follow-up in the control group (incidence rate ratio 1.96) and a significant decrease in the intervention group (incidence rate ratio 0.63) controlling for age, sex, family status, level of care, and impairment in activities of daily living. Conclusion Our results indicate that a preventive home visiting program can be effective in reducing falls in community-dwelling older people. PMID:23788832
Representations of spacetime diffeomorphisms. I. Canonical parametrized field theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isham, C.J.; Kuchar, K.V.
The super-Hamiltonian and supermomentum in canonical geometrodynamics, or in a parametrized field theory on a given Riemannian background, have Poisson brackets which obey the Dirac relations. By smearing the supermomentum with vector fields V ∈ LDiff Σ on the space manifold Σ, the Lie algebra LDiff Σ of the spatial diffeomorphism group Diff Σ can be mapped antihomomorphically into the Poisson bracket algebra on the phase space of the system. The explicit dependence of the Poisson brackets between two super-Hamiltonians on the canonical coordinates (spatial metrics in geometrodynamics and embedding variables in parametrized theories) is usually regarded as an indication that the Dirac relations cannot be connected with a representation of the complete Lie algebra LDiff M of spacetime diffeomorphisms.
Some functional limit theorems for compound Cox processes
NASA Astrophysics Data System (ADS)
Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.
2016-06-01
An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
Ages of Records in Random Walks
NASA Astrophysics Data System (ADS)
Szabó, Réka; Vető, Bálint
2016-12-01
We consider random walks with continuous and symmetric step distributions. We prove universal asymptotics for the average proportion of the age of the kth longest lasting record for k = 1, 2, … and for the probability that the record of the kth longest age is broken at step n. Due to the relation to the Chinese restaurant process, the ranked sequence of proportions of ages converges to the Poisson-Dirichlet distribution.
Slow diffusion by Markov random flights
NASA Astrophysics Data System (ADS)
Kolesnik, Alexander D.
2018-06-01
We present a conception of slow diffusion processes in the Euclidean spaces ℝ^m, m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, on long time intervals, leads to the stationary distributions is given. The stationary distributions of slow diffusion processes in some Euclidean spaces of low dimension are presented.
Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data
Wikle, C.K.; Royle, J. Andrew
2005-01-01
Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.
Impact of pore size variability and network coupling on electrokinetic transport in porous media
NASA Astrophysics Data System (ADS)
Alizadeh, Shima; Bazant, Martin Z.; Mani, Ali
2016-11-01
We have developed and validated an efficient and robust computational model to study coupled fluid and ion transport through electrokinetic porous media, which are exposed to external gradients of pressure, electric potential, and concentration. In our approach a porous medium is modeled as a network of many pores through which transport is described by the coupled Poisson-Nernst-Planck-Stokes equations. When the pore sizes are random, the interactions between various modes of transport may provoke complexities such as concentration polarization shocks and internal flow circulations. These phenomena impact mixing and transport in various systems including deionization and filtration systems, supercapacitors, and lab-on-a-chip devices. In this work, we present simulations of massive networks of pores and we demonstrate the impact of pore size variation and pore-pore coupling on the overall electrokinetic transport in porous media.
The Semigeostrophic Equations Discretized in Reference and Dual Variables
NASA Astrophysics Data System (ADS)
Cullen, Mike; Gangbo, Wilfrid; Pisante, Giovanni
2007-08-01
We study the evolution of a system of n particles {(x_i, v_i)}_{i=1}^n in ℝ^{2d}. That system is a conservative system with a Hamiltonian of the form H[μ] = W_2^2(μ, ν^n), where W_2 is the Wasserstein distance and μ is a discrete measure concentrated on the set {(x_i, v_i)}_{i=1}^n. Typically, μ(0) is a discrete measure approximating an initial L^∞ density and can be chosen randomly. When d = 1, our results prove convergence of the discrete system to a variant of the semigeostrophic equations. We obtain that the limiting densities are absolutely continuous with respect to the Lebesgue measure. When {ν^n}_{n=1}^∞ converges to a measure concentrated on a special d-dimensional set, we obtain the Vlasov-Monge-Ampère (VMA) system. When d = 1, the VMA system coincides with the standard Vlasov-Poisson system.
Incorporating signal-dependent noise for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Morman, Christopher J.; Meola, Joseph
2015-05-01
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
Punctuated equilibrium dynamics in human communications
NASA Astrophysics Data System (ADS)
Peng, Dan; Han, Xiao-Pu; Wei, Zong-Wen; Wang, Bing-Hong
2015-10-01
A minimal model based on a network incorporating individual interactions is proposed to study the non-Poisson statistical properties of human behavior: individuals in the system interact with their neighbors, the probability of an individual acting correlates with its activity, and all the individuals involved in an action change their activities randomly. The model reproduces a variety of spatial-temporal patterns observed in empirical studies of human daily communications, providing insight into various human activities and embracing a range of realistic social interacting systems, in particular an intriguing bimodal phenomenon. This model bridges priority queueing theory and punctuated equilibrium dynamics, and our modeling and analysis are likely to shed light on non-Poisson phenomena in many complex systems.
A New Model that Generates Lotka's Law.
ERIC Educational Resources Information Center
Huber, John C.
2002-01-01
Develops a new model for a process that generates Lotka's Law. Topics include measuring scientific productivity through the number of publications; rate of production; career duration; randomness; Poisson distribution; computer simulations; goodness-of-fit; theoretical support for the model; and future research. (Author/LRW)
Technical and biological variance structure in mRNA-Seq data: life in the real world
2012-01-01
Background mRNA expression data from next generation sequencing platforms is obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution in which the variance is equal to the mean. The Negative Binomial distribution which allows for over-dispersion, i.e., for the variance to be greater than the mean, is commonly used to model count data as well. Results In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution as has been reported previously and biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high variance genes but increased the over-fitting problem. Conclusions These conclusions will guide development of analytical strategies for accurate modeling of variance structure in these data and sample size determination which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
An analytic solution of the stochastic storage problem applicable to soil water
Milly, P.C.D.
1993-01-01
The accumulation of soil water during rainfall events and the subsequent depletion of soil water by evaporation between storms can be described, to first order, by simple accounting models. When the alternating supplies (precipitation) and demands (potential evaporation) are viewed as random variables, it follows that soil-water storage, evaporation, and runoff are also random variables. If the forcing (supply and demand) processes are stationary for a sufficiently long period of time, an asymptotic regime should eventually be reached where the probability distribution functions of storage, evaporation, and runoff are stationary and uniquely determined by the distribution functions of the forcing. Under the assumptions that the potential evaporation rate is constant, storm arrivals are Poisson-distributed, rainfall is instantaneous, and storm depth follows an exponential distribution, it is possible to derive the asymptotic distributions of storage, evaporation, and runoff analytically for a simple balance model. A particular result is that the fraction of rainfall converted to runoff is given by (1 − R^{−1}) / (e^{α(1−R^{−1})} − R^{−1}), in which R is the ratio of mean potential evaporation to mean rainfall and α is the ratio of soil water-holding capacity to mean storm depth. The problem considered here is analogous to the well-known problem of storage in a reservoir behind a dam, for which the present work offers a new solution for reservoirs of finite capacity. A simple application of the results of this analysis suggests that random, intraseasonal fluctuations of precipitation cannot by themselves explain the observed dependence of the annual water balance on annual totals of precipitation and potential evaporation.
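The closed-form runoff fraction quoted above is easy to evaluate; note that R = 1 is a removable singularity, where the expression tends to 1/(1 + α). A small sketch:

    import numpy as np

    def runoff_fraction(R, alpha):
        # fraction of rainfall converted to runoff in the asymptotic regime
        R_inv = 1.0 / R
        return (1.0 - R_inv) / (np.exp(alpha * (1.0 - R_inv)) - R_inv)

    # wetter climate (R < 1) and smaller storage (alpha small) give more runoff
    print(runoff_fraction(R=0.5, alpha=2.0), runoff_fraction(R=2.0, alpha=5.0))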
Lin, I-Chun; Xing, Dajun; Shapley, Robert
2014-01-01
One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587
Loukiadis, Estelle; Bièche-Terrier, Clémence; Malayrat, Catherine; Ferré, Franck; Cartier, Philippe; Augustin, Jean-Christophe
2017-06-05
Undercooked ground beef is regularly implicated in food-borne outbreaks involving pathogenic Shiga toxin-producing Escherichia coli. The dispersion of bacteria during mixing processes is of major concern for quantitative microbiological risk assessment, since clustering will influence the number of bacteria the consumers might be exposed to as well as the performance of sampling plans used to detect contaminated ground beef batches. In this study, batches of 25 kg of ground beef were manufactured according to a process mimicking industrial-scale grinding with three successive steps: primary grinding, mixing and final grinding. The ground beef batches were made with 100% chilled trims or with 2/3 chilled trims and 1/3 frozen trims. Prior to grinding, one beef trim was contaminated with approximately 10⁶-10⁷ CFU of E. coli O157:H7 on a surface of 0.5 cm² to reach a concentration of 10-100 cells/g in ground beef. The E. coli O157:H7 distribution in ground beef was characterized by enumerating 60 samples (20 samples of 5 g, 20 samples of 25 g and 20 samples of 100 g) and fitting a Poisson-gamma model to describe the variability of bacterial counts. The shape parameter of the gamma distribution, also known as the dispersion parameter reflecting the amount of clustering, was estimated between 1.0 and 1.6. This k-value of approximately 1 expresses a moderate level of clustering of bacterial cells in the ground beef. The impact of this clustering on the performance of sampling strategies was relatively limited in comparison to the classical hypothesis of a random repartition of pathogenic cells in mixed materials (purely Poisson distribution instead of Poisson-gamma distribution). Copyright © 2017 Elsevier B.V. All rights reserved.
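The practical consequence for sampling plans can be seen in a two-line calculation: under clustering the zero-count probability of a sample is negative binomial rather than Poisson. A minimal sketch, with the dispersion parameter k in the range estimated above and an illustrative concentration:

    import numpy as np

    def p_detect(conc, mass_g, k=None):
        mu = conc * mass_g                          # expected cells in the analytical sample
        if k is None:
            return 1.0 - np.exp(-mu)                # Poisson: random repartition
        return 1.0 - (1.0 + mu / k) ** (-k)         # Poisson-gamma: clustering with dispersion k

    # detection probability for a 25 g sample at 0.04 cells/g
    print(p_detect(0.04, 25.0), p_detect(0.04, 25.0, k=1.2))

The modest gap between the two numbers mirrors the paper's conclusion that the observed clustering has a relatively limited impact on sampling-plan performance.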
Spectral statistics of random geometric graphs
NASA Astrophysics Data System (ADS)
Dettmann, C. P.; Georgiou, O.; Knight, G.
2017-04-01
We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, level of community structure and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
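The Poisson-to-GOE transition can be probed quickly with the unfolding-free spacing-ratio statistic, a short-range measure related to (but not identical with) the spacing distributions used in the paper. A minimal sketch with networkx:

    import networkx as nx
    import numpy as np

    G = nx.random_geometric_graph(n=500, radius=0.12, seed=1)   # spatial network on the unit square
    evals = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    s = np.diff(evals)
    s = s[s > 1e-10]                                            # drop gaps from degenerate levels
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    print(r.mean())   # reference values: about 0.386 for Poisson, 0.536 for GOE statistics

Sweeping the connection radius and watching the mean ratio move between the two reference values reproduces the parameter-dependent transition described above.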
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, recently sparse representations of images have shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, the pixel-based total variation regularization term and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.
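Reading the three terms off the abstract, the variational problem has roughly the following shape; this is our transcription, and the exact weighting and patch-sparsity formulation are assumptions:

    \min_{u,\,\{\alpha_p\}} \; \sum_i \big[(Hu)_i - f_i \log (Hu)_i\big]
      \;+\; \lambda\,\mathrm{TV}(u)
      \;+\; \mu \sum_p \big\| R_p u - D\alpha_p \big\|_2^2
      \quad \text{s.t. } \|\alpha_p\|_0 \le s,

where H is the blur operator, f the observed image, R_p the operator extracting the p-th patch, and D the learned dictionary; the first term is the negative Poisson log-likelihood up to constants, and variable splitting serves to decouple the three terms during the alternating minimization.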
Simulation on Poisson and negative binomial models of count road accident modeling
NASA Astrophysics Data System (ADS)
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion. On the other hand, the data might contain zero counts (excess zeros). A simulation study was conducted to create scenarios in which accidents happen at a T-junction, under the assumption that the dependent variable of the generated data follows a given distribution, namely the Poisson or the negative binomial distribution, with sample sizes ranging from n = 30 to n = 500. The study objective was accomplished by fitting the Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Model validity was compared, and the simulation results show that, for each sample size, not every model fits the data well even when the data are generated from that model's own distribution, especially when the sample size is larger.
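A scaled-down version of such a simulation experiment fits in a few lines with statsmodels (our sketch, not the authors' code): generate from a Poisson model and compare the AIC across fitted families, repeating over replicates and sample sizes to mimic the study design.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 200
    X = sm.add_constant(rng.normal(size=(n, 2)))
    y = rng.poisson(np.exp(X @ np.array([0.4, 0.3, -0.2])))   # Poisson-generated accident counts

    pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
    print(pois.aic, nb.aic)   # one replicate; loop over seeds and n to build the comparison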
Chen, Da; Zheng, Xiaoyu
2018-06-14
Nature has evolved with a recurring strategy to achieve unusual mechanical properties through coupling variable elastic moduli, from a few GPa to below kPa, within a single tissue. The ability to produce multi-material, three-dimensional (3D) micro-architectures with high fidelity incorporating dissimilar components has been a major challenge in man-made materials. Here we show multi-modulus metamaterials whose architectural element is comprised of encoded elasticity ranging from rigid to soft. We found that, in contrast to ordinary architected materials whose negative Poisson's ratio is dictated by their geometry, this type of metamaterial is capable of displaying Poisson's ratios from extreme negative to zero, independent of its 3D micro-architecture. The resulting low-density metamaterial is capable of achieving functionally graded, distributed strain amplification within the metamaterial with uniform micro-architectures. Simultaneous tuning of Poisson's ratio and moduli within the 3D multi-materials could open up a broad array of materials-by-design applications ranging from flexible armor and artificial muscles to actuators and bio-mimetic materials.
NASA Astrophysics Data System (ADS)
Tóth, B.; Lillo, F.; Farmer, J. D.
2010-11-01
We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange, and we observe that our method finds almost three times more patches than the original one.
Computation of solar perturbations with Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1974-01-01
Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
Chen, Junning; Suenaga, Hanako; Hogg, Michael; Li, Wei; Swain, Michael; Li, Qing
2016-01-01
Despite their considerable importance to biomechanics, there are no existing methods available to directly measure the apparent Poisson's ratio and friction coefficient of oral mucosa. This study aimed to develop an inverse procedure to determine these two biomechanical parameters by combining an in vivo measurement of the contact pressure between a partial denture and the underlying mucosa with nonlinear finite element (FE) analysis and a surrogate response surface (RS) modelling technique. First, the in vivo denture-mucosa contact pressure was measured by a tactile electronic sensing sheet. Second, a 3D FE model was constructed based on the patient CT images. Third, a range of apparent Poisson's ratios and coefficients of friction from the literature was considered as the design variables in a series of FE runs for constructing a RS surrogate model. Finally, the discrepancy between the computed in silico and measured in vivo results was minimized to identify the best matching Poisson's ratio and coefficient of friction. The established non-invasive methodology was demonstrated to be effective for identifying such biomechanical parameters of oral mucosa and can potentially be used for determining the biomaterial properties of other soft biological tissues.
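The final minimization step can be sketched with a generic quadratic response surface standing in for the FE runs; every number below is a placeholder, not a value from the study.

    import numpy as np
    from scipy.optimize import minimize

    coeffs = np.array([120.0, -80.0, 30.0, 5.0, 40.0, -10.0])  # hypothetical RS coefficients (kPa)
    measured = 95.0                                            # hypothetical in vivo peak pressure (kPa)

    def rs_pressure(params):
        nu, mu = params                                        # Poisson's ratio, friction coefficient
        basis = np.array([1.0, nu, mu, nu * mu, nu**2, mu**2]) # quadratic response surface
        return basis @ coeffs

    res = minimize(lambda p: (rs_pressure(p) - measured) ** 2,
                   x0=[0.40, 0.30],
                   bounds=[(0.10, 0.49), (0.01, 1.00)])
    print(res.x)   # best-matching (Poisson's ratio, friction coefficient)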
Universal Hitting Time Statistics for Integrable Flows
NASA Astrophysics Data System (ADS)
Dettmann, Carl P.; Marklof, Jens; Strömbergsson, Andreas
2017-02-01
The perceived randomness in the time evolution of "chaotic" dynamical systems can be characterized by universal probabilistic limit laws, which do not depend on the fine features of the individual system. One important example is the Poisson law for the times at which a particle with random initial data hits a small set. This was proved in various settings for dynamical systems with strong mixing properties. The key result of the present study is that, despite the absence of mixing, the hitting times of integrable flows also satisfy universal limit laws which are, however, not Poisson. We describe the limit distributions for "generic" integrable flows and a natural class of target sets, and illustrate our findings with two examples: the dynamics in central force fields and ellipse billiards. The convergence of the hitting time process follows from a new equidistribution theorem in the space of lattices, which is of independent interest. Its proof exploits Ratner's measure classification theorem for unipotent flows, and extends earlier work of Elkies and McMullen.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Nunno, Giulia, E-mail: giulian@math.uio.no; Khedher, Asma, E-mail: asma.khedher@tum.de; Vanmaele, Michèle, E-mail: michele.vanmaele@ugent.be
We consider a backward stochastic differential equation with jumps (BSDEJ) which is driven by a Brownian motion and a Poisson random measure. We present two candidate-approximations to this BSDEJ and we prove that the solution of each candidate-approximation converges to the solution of the original BSDEJ in a space which we specify. We use this result to investigate in further detail the consequences of the choice of the model to (partial) hedging in incomplete markets in finance. As an application, we consider models in which the small variations in the price dynamics are modeled with a Poisson random measure with infinite activity and models in which these small variations are modeled with a Brownian motion or are cut off. Using the convergence results on BSDEJs, we show that quadratic hedging strategies are robust towards the approximation of the market prices and we derive an estimation of the model risk.
Predicting Coastal Flood Severity using Random Forest Algorithm
NASA Astrophysics Data System (ADS)
Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.
2017-12-01
Coastal floods have become more common recently and are predicted to further increase in frequency and severity due to sea level rise. Predicting floods in coastal cities can be difficult due to the number of environmental and geographic factors which can influence flooding events. Built stormwater infrastructure and irregular urban landscapes add further complexity. This paper demonstrates the use of machine learning algorithms in predicting street flood occurrence in an urban coastal setting. The model is trained and evaluated using data from Norfolk, Virginia USA from September 2010 to October 2016. Rainfall, tide levels, water table levels, and wind conditions are used as input variables. Street flooding reports made by city workers after named and unnamed storm events, ranging from 1 to 159 reports per event, are the model output. Results show that Random Forest provides predictive power in estimating the number of flood occurrences given a set of environmental conditions, with an out-of-bag root mean squared error of 4.3 flood reports and a mean absolute error of 0.82 flood reports. The Random Forest algorithm performed much better than Poisson regression. From the Random Forest model, total daily rainfall was by far the most important factor in flood occurrence prediction, followed by daily low tide and daily higher high tide. The model demonstrated here could be used to predict flood severity based on forecast rainfall and tide conditions and could be further enhanced using more complete street flooding data for model training.
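In scikit-learn terms the modelling step reduces to a few lines; the sketch below is our illustration, with a hypothetical input file and column names standing in for the rainfall, tide, groundwater, and wind predictors named above.

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    df = pd.read_csv("norfolk_flood_events.csv")   # hypothetical per-event records
    features = ["rain_total", "daily_low_tide", "higher_high_tide",
                "water_table", "wind_speed"]
    rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(df[features], df["flood_reports"])

    print(rf.oob_score_)                                 # out-of-bag generalization check
    print(dict(zip(features, rf.feature_importances_)))  # rainfall should dominate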
Analysis of Machine Learning Techniques for Heart Failure Readmissions.
Mortazavi, Bobak J; Downing, Nicholas S; Bucholz, Emily M; Dharmarajan, Kumar; Manhapra, Ajay; Li, Shu-Xia; Negahban, Sahand N; Krumholz, Harlan M
2016-11-01
The current ability to predict readmissions in patients with heart failure is modest at best. It is unclear whether machine learning techniques that address higher dimensional, nonlinear relationships among variables would enhance prediction. We sought to compare the effectiveness of several machine learning algorithms for predicting readmissions. Using data from the Telemonitoring to Improve Heart Failure Outcomes trial, we compared the effectiveness of random forests, boosting, random forests combined hierarchically with support vector machines or logistic regression (LR), and Poisson regression against traditional LR to predict 30- and 180-day all-cause readmissions and readmissions because of heart failure. We randomly selected 50% of patients for a derivation set, and a validation set comprised the remaining patients, validated using 100 bootstrapped iterations. We compared C statistics for discrimination and distributions of observed outcomes in risk deciles for predictive range. In 30-day all-cause readmission prediction, the best performing machine learning model, random forests, provided a 17.8% improvement over LR (mean C statistics, 0.628 and 0.533, respectively). For readmissions because of heart failure, boosting improved the C statistic by 24.9% over LR (mean C statistic 0.678 and 0.543, respectively). For 30-day all-cause readmission, the observed readmission rates in the lowest and highest deciles of predicted risk with random forests (7.8% and 26.2%, respectively) showed a much wider separation than LR (14.2% and 16.4%, respectively). Machine learning methods improved the prediction of readmission after hospitalization for heart failure compared with LR and provided the greatest predictive range in observed readmission rates. © 2016 American Heart Association, Inc.
Six-component semi-discrete integrable nonlinear Schrödinger system
NASA Astrophysics Data System (ADS)
Vakhnenko, Oleksiy O.
2018-01-01
We suggest a six-component integrable nonlinear system on a quasi-one-dimensional lattice. Due to its symmetrical form, the general system permits a number of reductions; one of these, treated as the semi-discrete integrable nonlinear Schrödinger system on a lattice with three structural elements in the unit cell, is considered in considerable detail. Besides six truly independent basic field variables, the system is characterized by four concomitant fields whose background values produce three additional types of inter-site resonant interactions between the basic fields. As a result, the system dynamics becomes associated with a highly nonstandard form of Poisson structure. The elementary Poisson brackets between all field variables are calculated and presented explicitly. The richness of the system dynamics is demonstrated on the multi-component soliton solution written in terms of properly parameterized soliton characteristics.
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of Bayesian inference is that prior information for the independent variables can be included in the inference procedure. However, few studies have discussed how to formulate informative priors for the independent variables or evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches for developing informative priors for the independent variables based on historical data and expert experience. The merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variation of the estimates were used as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracy. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. The effects of the different types of informative priors on model estimation and goodness-of-fit have been compared, and conclusions drawn. Finally, based on the results, recommendations for future research topics and study applications are made. Copyright © 2013 Elsevier Ltd. All rights reserved.
MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valiollahzadeh, S; Clark, J; Mawlawi, O
2014-06-15
Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred and noisy PET image to the desired image is described as a linear transform y=Hm+n, where m is the desired image, H is a blur kernel, n is Poisson noise and y is the blurred image. The approach we follow to recover m involves the sparse representation of y over a learned dictionary, since the image has lots of repeated patterns, edges, textures and smooth regions. The recovery is based on an optimization of a cost function having four major terms: an adaptive dictionary learning term, a sparsity term, a regularization term, and an MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ=5.4mm; Uniform 5×5, σ=2.9mm) with additive Poisson noise (Blurred). Image recovery was performed once when the kernel type was included in the model optimization and once with the model blinded to kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating PSNR (Peak SNR) and normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (Uniform) was 28.73 (25.1) and 25.18 (23.4) for DL and PIDSPLIT+ respectively. For blinded deblurring the PSNRs were 25.32 and 22.86 for DL and PIDSPLIT+ respectively. NADPI between baseline and DL, and baseline and blurred, for the Gaussian kernel was 2.5 and 10.8 respectively. Conclusion: PET image deblurring using dictionary learning seems to be a good approach to restore image resolution in the presence of Poisson noise. GE Health Care.
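The adaptive dictionary-learning optimization itself is too involved to reproduce here, but the MLEM Poisson term it builds on is essentially a Richardson-Lucy update, which can be sketched for a known 9×9 Gaussian kernel (the phantom and parameter values below are simplified assumptions, not the paper's setup):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=50):
    # Multiplicative MLEM updates for the Poisson blur model y ~ Poisson(H m)
    est = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        denom = np.maximum(fftconvolve(est, psf, mode="same"), 1e-12)
        est *= fftconvolve(blurred / denom, psf_mirror, mode="same")
    return est

# Demo: 9x9 Gaussian kernel, loosely following the abstract's first test case
gx, gy = np.meshgrid(np.arange(9) - 4, np.arange(9) - 4)
psf = np.exp(-(gx**2 + gy**2) / (2 * 2.3**2))
psf /= psf.sum()
truth = np.zeros((128, 128))
truth[40:80, 50:90] = 100.0                       # crude stand-in phantom
rng = np.random.default_rng(0)
observed = rng.poisson(np.clip(fftconvolve(truth, psf, mode="same"), 0, None))
restored = richardson_lucy(observed.astype(float), psf)
print("RMSE blurred: ", np.sqrt(np.mean((observed - truth) ** 2)).round(2))
print("RMSE restored:", np.sqrt(np.mean((restored - truth) ** 2)).round(2))
```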
Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward
2013-09-01
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
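For orientation, the two quantities contrasted in this abstract can be written in a few lines: the exact beta-Poisson probability of infection via the Kummer confluent hypergeometric function, next to the conventional approximation whose validity the paper questions (parameter values here are illustrative, not the case-study estimates):

```python
import numpy as np
from scipy.special import hyp1f1

def beta_poisson_exact(dose, a, b):
    # Exact single-hit risk: 1 - 1F1(a, a + b, -dose)  (Kummer function)
    return 1.0 - hyp1f1(a, a + b, -dose)

def beta_poisson_approx(dose, a, b):
    # Conventional approximation, adequate only when b >> 1 and b >> a
    return 1.0 - (1.0 + dose / b) ** (-a)

a, b = 0.25, 16.0   # illustrative parameters, not fitted values
for d in [0.1, 1.0, 10.0, 100.0]:
    print(d, round(beta_poisson_exact(d, a, b), 4),
          round(beta_poisson_approx(d, a, b), 4))
```

The gap between the two columns grows as the shape parameters leave the approximation's validity region, which is exactly the regime the posterior mass discussed in the abstract can occupy.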
Generalized master equations for non-Poisson dynamics on networks.
Hoffmann, Till; Porter, Mason A; Lambiotte, Renaud
2012-10-01
The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature.
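A toy simulation, under assumed parameters, of the node-centric walk studied here: with the same waiting-time distribution at every node, the long-run occupation matches the degree-based stationary solution whether the waits are exponential (Poissonian) or heavy-tailed, illustrating the abstract's claim about the stationary solution:

```python
import numpy as np

rng = np.random.default_rng(1)
nbrs = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}  # small test graph

def occupation(wait_sampler, t_max=1e5):
    # Node-centric CTRW: wait a random time at the current node, then hop
    # to a uniformly chosen neighbour; return the fraction of time per node.
    time_at = np.zeros(len(nbrs))
    node, t = 0, 0.0
    while t < t_max:
        w = wait_sampler()
        time_at[node] += w
        t += w
        node = nbrs[node][rng.integers(len(nbrs[node]))]
    return time_at / time_at.sum()

deg = np.array([len(v) for v in nbrs.values()])
print("degree-based stationary solution:", deg / deg.sum())
print("exponential waits:", occupation(lambda: rng.exponential(1.0)).round(3))
print("Pareto waits:     ", occupation(lambda: 1.5 * rng.pareto(2.5)).round(3))
```

Both samplers have unit mean, so the empirical occupations should agree with the degree-based prediction up to Monte Carlo noise.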
NASA Technical Reports Server (NTRS)
Ingels, Frank; Owens, John; Daniel, Steven
1989-01-01
The protocol definition and terminal hardware for the modified free access (MFA) protocol, a communications protocol similar to Ethernet, are developed. An MFA protocol simulator and a CSMA/CD math model are also developed. The protocol is tailored to communication systems where the total traffic may be divided into scheduled traffic and Poisson traffic. The scheduled traffic should occur on a periodic basis but may occur after a given event such as a request for data from a large number of stations. The Poisson traffic will include alarms and other random traffic. The purpose of the protocol is to guarantee that scheduled packets will be delivered without collision. This is required in many control and data collection systems. The protocol uses standard Ethernet hardware and software, requiring minimum modifications to an existing system. The modification to the protocol only affects the Ethernet transmission privileges and does not affect the Ethernet receiver.
Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.
Gomez, Christophe; Hartung, Niklas
2018-01-01
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.
Chen, Duan
2017-11-01
In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to the intrinsic conformational changes, crowdedness in narrow channel pores, binding and trapping introduced by functioning units of channel proteins, ionic transport in the channel exhibits a power-law-like anomalous diffusion dynamics. We start from continuous-time random walk model for a single ion and use a long-tailed density distribution function for the particle jump waiting time, to derive the fractional Fokker-Planck equation. Then, it is generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. Necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of gating current is investigated. Numerical simulations show that the fractional PNP model provides a more qualitatively reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model motivates new challenges in terms of mathematical modeling and computations.
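The continuous-time random walk starting point of this derivation is easy to reproduce numerically: with a long-tailed waiting-time density of tail index α < 1, the mean-squared displacement grows like t^α rather than t. A sketch under assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n_walkers = 0.7, 2000        # waiting-time tail index, ensemble size
t_obs = np.logspace(0, 3, 10)       # observation times
msd = np.zeros_like(t_obs)

for _ in range(n_walkers):
    t, x, idx = 0.0, 0, 0
    while idx < len(t_obs):
        t += rng.pareto(alpha)      # heavy-tailed wait (infinite mean)
        while idx < len(t_obs) and t_obs[idx] < t:
            msd[idx] += x * x       # walker sits at x until the next jump
            idx += 1
        x += rng.choice((-1, 1))    # unbiased unit jump

msd /= n_walkers
slope = np.polyfit(np.log(t_obs[3:]), np.log(msd[3:]), 1)[0]
print("MSD growth exponent ~", round(slope, 2), "(subdiffusive: near alpha < 1)")
```

The fitted exponent should come out close to α, the anomalous-diffusion signature that the fractional Fokker-Planck and fractional PNP equations encode at the macroscopic level.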
NASA Astrophysics Data System (ADS)
Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.
2005-03-01
Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows one to obtain posterior distributions of derived parameters such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
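A frequentist sketch of the underlying peaks-over-threshold model (synthetic wave heights and an assumed Poisson event rate; the paper's Bayesian estimation with physically motivated priors is not reproduced here):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
heights = rng.gumbel(2.0, 0.8, 3000)   # synthetic storm wave heights (m)
u = np.quantile(heights, 0.95)         # empirical threshold for excesses
exc = heights[heights > u] - u
c, _, scale = genpareto.fit(exc, floc=0.0)   # GPD shape and scale

events_per_year = 20.0                                   # assumed Poisson rate
rate_exceed = events_per_year * len(exc) / len(heights)  # exceedances per year
T = 100.0                                                # return period (years)
x_T = u + genpareto.ppf(1.0 - 1.0 / (rate_exceed * T), c, 0.0, scale)
print(f"{T:.0f}-year wave height ~ {x_T:.2f} m")
```

Propagating parameter uncertainty through this return-level calculation, as the abstract's posterior distributions do, is what can mask the apparent effect of a few recent extreme events.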
Fractional properties of geophysical field variability on the example of hydrochemical parameters
NASA Astrophysics Data System (ADS)
Shevtsov, Boris; Shevtsova, Olga
2017-10-01
Using the properties of the compound Poisson process and its fractional generalizations, statistical models of geophysical field variability are considered using the example of a system of hydrochemical parameters. These models are universal in describing objects of different natures and explain various pulsing regimes. Manifestations of non-conservatism in the hydrochemical parameter system and the advantages of the systems approach in the description of geophysical field variability are discussed.
NASA Astrophysics Data System (ADS)
Banerji, Anirban; Magarkar, Aniket
2012-09-01
We feel happy when web browsing operations provide us with the necessary information; otherwise, we feel bitter. How can this happiness (or bitterness) be measured? How does the profile of happiness grow and decay during the course of web browsing? We propose a probabilistic framework that models the evolution of user satisfaction, on top of his/her continuous frustration at not finding the required information. It is found that the cumulative satisfaction profile of a web-searching individual can be modeled effectively as the sum of a random number of random terms, where each term is a mutually independent random variable originating from a 'memoryless' Poisson flow. Evolution of satisfaction over the entire time interval of a user's browsing was modeled using auto-correlation analysis. A utilitarian marker, whose magnitude exceeding unity describes happy web-searching operations, and an empirical limit that connects the user's satisfaction with his frustration level are proposed as well. The presence of pertinent information on the very first page of a website and the magnitude of the decay parameter of user satisfaction (frustration, irritation, etc.) are found to be the two key aspects that dominate the web user's psychology. The proposed model employed different combinations of the decay parameter, searching time and number of helpful websites. The obtained results are found to match the results from three real-life case studies.
Application of spatial Poisson process models to air mass thunderstorm rainfall
NASA Technical Reports Server (NTRS)
Eagleson, P. S.; Fennessy, N. M.; Wang, Qinliang; Rodriguez-Iturbe, I.
1987-01-01
Eight years of summer storm rainfall observations from 93 stations in and around the 154 sq km Walnut Gulch catchment of the Agricultural Research Service, U.S. Department of Agriculture, in Arizona are processed to yield the total station depths of 428 storms. Statistical analysis of these random fields yields the first two moments, the spatial correlation and variance functions, and the spatial distribution of total rainfall for each storm. The absolute and relative worth of three Poisson models are evaluated by comparing their prediction of the spatial distribution of storm rainfall with observations from the second half of the sample. The effect of interstorm parameter variation is examined.
Smooth invariant densities for random switching on the torus
NASA Astrophysics Data System (ADS)
Bakhtin, Yuri; Hurth, Tobias; Lawley, Sean D.; Mattingly, Jonathan C.
2018-04-01
We consider a random dynamical system obtained by switching between the flows generated by two smooth vector fields on the 2d-torus, with the random switchings happening according to a Poisson process. Assuming that the driving vector fields are transversal to each other at all points of the torus and that each of them allows for a smooth invariant density and no periodic orbits, we prove that the switched system also has a smooth invariant density, for every switching rate. Our approach is based on an integration by parts formula inspired by techniques from Malliavin calculus.
Intermediate quantum maps for quantum computation
NASA Astrophysics Data System (ADS)
Giraud, O.; Georgeot, B.
2005-10-01
We study quantum maps displaying spectral statistics intermediate between Poisson and Wigner-Dyson. It is shown that they can be simulated on a quantum computer with a small number of gates, and efficiently yield information about fidelity decay or spectral statistics. We study their matrix elements and entanglement production and show that they converge with time to distributions which differ from random matrix predictions. A randomized version of these maps can be implemented even more economically and yields pseudorandom operators with original properties, enabling, for example, one to produce fractal random vectors. These algorithms are within reach of present-day quantum computers.
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe; Kwak, Dochan; Vinokur, Marcel
1992-01-01
A fractional step method is developed for solving the time-dependent three-dimensional incompressible Navier-Stokes equations in generalized coordinate systems. The primitive variable formulation uses the pressure, defined at the center of the computational cell, and the volume fluxes across the faces of the cells as the dependent variables, instead of the Cartesian components of the velocity. This choice is equivalent to using the contravariant velocity components in a staggered grid multiplied by the volume of the computational cell. The governing equations are discretized by finite volumes using a staggered mesh system. The solution of the continuity equation is decoupled from the momentum equations by a fractional step method which enforces mass conservation by solving a Poisson equation. This procedure, combined with the consistent approximations of the geometric quantities, is done to satisfy the discretized mass conservation equation to machine accuracy, as well as to gain the favorable convergence properties of the Poisson solver. The momentum equations are solved by an approximate factorization method, and a novel ZEBRA scheme with four-color ordering is devised for the efficient solution of the Poisson equation. Several two- and three-dimensional laminar test cases are computed and compared with other numerical and experimental results to validate the solution method. Good agreement is obtained in all cases.
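A heavily simplified sketch of the projection idea described here: solve a discrete Poisson equation for the pressure so that the corrected velocity field is divergence-free. This assumes a periodic collocated grid and plain Jacobi sweeps, not the paper's staggered finite-volume mesh, approximate factorization, or four-color ZEBRA scheme:

```python
import numpy as np

def pressure_projection(u, v, dx, dt, n_iter=400):
    # Fractional step: enforce mass conservation via a pressure Poisson solve.
    div = (np.roll(u, -1, 1) - np.roll(u, 1, 1) +
           np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
    p = np.zeros_like(u)
    for _ in range(n_iter):   # Jacobi on the wide (2*dx) Laplacian stencil,
        p = 0.25 * (np.roll(p, 2, 0) + np.roll(p, -2, 0) +    # consistent with
                    np.roll(p, 2, 1) + np.roll(p, -2, 1)       # the centered
                    - (2 * dx) ** 2 * div / dt)                # div and grad
    u = u - dt * (np.roll(p, -1, 1) - np.roll(p, 1, 1)) / (2 * dx)
    v = v - dt * (np.roll(p, -1, 0) - np.roll(p, 1, 0)) / (2 * dx)
    return u, v, p

nx = 32
dx = 1.0 / nx
rng = np.random.default_rng(0)
u0, v0 = rng.standard_normal((nx, nx)), rng.standard_normal((nx, nx))
u1, v1, _ = pressure_projection(u0.copy(), v0.copy(), dx, dt=0.01)

def max_div(u, v):
    return np.abs((np.roll(u, -1, 1) - np.roll(u, 1, 1) +
                   np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)).max()

print("max |div| before:", round(max_div(u0, v0), 6))
print("max |div| after: ", round(max_div(u1, v1), 6))
```

The residual divergence after the solve is exactly the Poisson solver's residual, which is why the paper invests in consistent geometric approximations and an efficient Poisson scheme.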
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
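The compound construction in this abstract (a Poisson-distributed number of distinct atoms, each carrying a logarithmically distributed count) can be checked directly against the negative binomial moments it should reproduce; parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_nb_process_totals(gamma0, p, size):
    # Compound-Poisson representation of NB(gamma0, p): Poisson number of
    # atoms with mass parameter -gamma0*log(1-p), each a logarithmic count.
    lam = -gamma0 * np.log(1.0 - p)
    n_atoms = rng.poisson(lam, size)
    return np.array([rng.logseries(p, k).sum() if k else 0 for k in n_atoms])

x = draw_nb_process_totals(gamma0=3.0, p=0.5, size=100_000)
print("mean:", x.mean().round(2), " (NB mean     = 3.0)")
print("var: ", x.var().round(2), " (NB variance = 6.0)")
```

With mass parameter λ = -γ₀ log(1 - p), the totals match NB(γ₀, p), which is the marginalization property the abstract exploits.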
Saez, M; Figueiras, A; Ballester, F; Pérez-Hoyos, S; Ocaña, R; Tobías, A
2001-06-01
The objective of this paper is to introduce a different approach, called ecological-longitudinal, to carrying out pooled analysis in time series ecological studies. Because it gives a larger number of data points and, hence, increases the statistical power of the analysis, this approach, unlike conventional ones, allows the accommodation of random effect models, lags, and interactions between pollutants and between pollutants and meteorological variables, which are hardly implementable in conventional approaches. The approach is illustrated by providing quantitative estimates of the short-term effects of air pollution on mortality in three Spanish cities, Barcelona, Valencia and Vigo, for the period 1992-1994. Because the dependent variable was a count, a Poisson generalised linear model was first specified. Several modelling issues are worth mentioning. Firstly, because the relations between mortality and explanatory variables were non-linear, cubic splines were used for covariate control, leading to a generalised additive model, GAM. Secondly, the effects of the predictors on the response were allowed to occur with some lag. Thirdly, the residual autocorrelation, due to imperfect control, was controlled for by means of an autoregressive Poisson GAM. Finally, the longitudinal design demanded consideration of the existence of individual heterogeneity, requiring the use of mixed models. The estimates of the relative risks obtained from the individual analyses varied across cities, particularly those associated with sulphur dioxide. The highest relative risks corresponded to black smoke in Valencia. These estimates were higher than those obtained from the ecological-longitudinal analysis. Relative risks estimated from this latter analysis were practically identical across cities, 1.00638 (95% confidence interval 1.0002, 1.0011) for a black smoke increase of 10 microg/m(3) and 1.00415 (95% CI 1.0001, 1.0007) for an increase of 10 microg/m(3) of sulphur dioxide. Because the statistical power is higher than in the individual analyses, more interactions were statistically significant, especially those between air pollutants and meteorological variables. Air pollutant levels were related to mortality in the three cities of the study, Barcelona, Valencia and Vigo. These results are consistent with similar studies in other cities and with other multicentric studies, and coherent with both previous individual studies for each city and multicentric studies for all three cities.
Poisson traces, D-modules, and symplectic resolutions
NASA Astrophysics Data System (ADS)
Etingof, Pavel; Schedler, Travis
2018-03-01
We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
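A sketch of this model-selection exercise with statsmodels on synthetic zero-heavy, overdispersed counts (intercept-only models for brevity; the log-normal and zero-inflated negative binomial variants from the paper are omitted):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(4)
n = 300
# Synthetic insect counts: overdispersed and zero-heavy, as in the abstract
y = rng.negative_binomial(0.5, 0.2, n) * rng.binomial(1, 0.6, n)
X = np.ones((n, 1))   # intercept-only design

fits = {
    "Poisson": sm.Poisson(y, X).fit(disp=0),
    "NegBin": sm.NegativeBinomial(y, X).fit(disp=0),
    "ZIP": ZeroInflatedPoisson(y, X).fit(disp=0),
}
for name, res in fits.items():
    print(f"{name:8s} AIC = {res.aic:9.1f}")
```

On data like these, the plain Poisson model should show a clearly inflated AIC, mirroring the paper's conclusion that ignoring excess zeros and variance heterogeneity biases inference.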
QMRA for Drinking Water: 2. The Effect of Pathogen Clustering in Single-Hit Dose-Response Models.
Nilsen, Vegard; Wyller, John
2016-01-01
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson-distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single-hit" dose-response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose-response models in terms of probability generating functions. It is shown formally that the theoretical single-hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single-hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single-hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose-response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose-response assessment as well as practical risk characterization are discussed. © 2016 Society for Risk Analysis.
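Because the single-hit risk is 1 - G(1 - r), with G the probability generating function of the dose distribution and r the per-pathogen infection probability, the ordering stated in this abstract can be verified in closed form; r and the dispersion k below are arbitrary illustrations:

```python
import numpy as np

def risk_poisson(mean_dose, r):
    # 1 - G(1 - r) with Poisson PGF G(s) = exp(mean*(s - 1))
    return 1.0 - np.exp(-mean_dose * r)

def risk_negbin(mean_dose, r, k):
    # 1 - G(1 - r) with NB PGF G(s) = (1 + mean*(1 - s)/k)**(-k); clustered doses
    return 1.0 - (1.0 + mean_dose * r / k) ** (-k)

r, k = 0.02, 0.5   # illustrative per-pathogen risk and dispersion
for d in [0.1, 1.0, 10.0, 100.0]:
    print(f"mean dose {d:6.1f}: Poisson {risk_poisson(d, r):.4f}  "
          f"clustered {risk_negbin(d, r, k):.4f}")
```

At every mean dose the clustered (negative binomial) dose distribution yields the lower risk, the comparison the paper proves in general for stuttering and mixed Poisson distributions.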
Poisson traces, D-modules, and symplectic resolutions.
Etingof, Pavel; Schedler, Travis
2018-01-01
We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.
[Socio-demographic and food insecurity characteristics of soup-kitchen users in Brazil].
Godoy, Kátia Cruz; Sávio, Karin Eleonora Oliveira; Akutsu, Rita de Cássia; Gubert, Muriel Bauermann; Botelho, Raquel Braz Assunção
2014-06-01
This study aimed to characterize users of a government soup-kitchen program and the association with family food insecurity, using a cross-sectional design and random sample of 1,637 soup-kitchen users. The study used a questionnaire with socioeconomic variables and the Brazilian Food Insecurity Scale, and measured weight and height. The chi-square test was applied, and the crude and adjusted prevalence ratios (PR) were calculated using Poisson regression. Prevalent characteristics included per capita income ranging from one-half to one minimum wage (35.1%), complete middle school (39.8%), and food security (59.4%). Users in the North of Brazil showed the worst data: incomplete primary school (39.8%), per capita income up to one-half the minimum wage (50.8%), and food insecurity (55.5%). Prevalence ratios for food insecurity were higher among users with per capita income up to one-fourth the minimum wage (p < 0.05). Income was the only variable that remained associated with higher prevalence of food insecurity in the adjusted PR. Knowing the characteristics of soup-kitchen users with food insecurity can help orient the program's work, location, and operations.
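A common way to obtain the adjusted prevalence ratios described here is Poisson regression with robust standard errors; a sketch on synthetic data (variable names and prevalences are assumed, not the survey's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1637                                  # sample size as in the survey
low_income = rng.binomial(1, 0.3, n)      # hypothetical exposure indicator
prev = 0.35 + 0.15 * low_income           # assumed prevalences by group
insecure = rng.binomial(1, prev)          # binary food-insecurity outcome

X = sm.add_constant(low_income)
# Poisson model with robust (HC0) errors: a standard route to prevalence
# ratios for a binary outcome in cross-sectional data
fit = sm.GLM(insecure, X, family=sm.families.Poisson()).fit(cov_type="HC0")
pr = np.exp(fit.params[1])
ci_lo, ci_hi = np.exp(fit.conf_int()[1])
print(f"PR = {pr:.2f}, 95% CI {ci_lo:.2f}-{ci_hi:.2f}")
```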
On a Stochastic Failure Model under Random Shocks
NASA Astrophysics Data System (ADS)
Cha, Ji Hwan
2013-02-01
In most conventional settings, the events caused by an external shock are initiated at the moments of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but with a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function, where applicable.
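A Monte Carlo sketch of the delayed-failure mechanism, simplified to a homogeneous Poisson process of shocks with exponential delays (the paper's setting is nonhomogeneous and derives the survival function analytically; all parameter values here are assumed):

```python
import numpy as np

rng = np.random.default_rng(6)

def survival_prob(t, rate=0.5, p_trigger=0.3, delay_mean=2.0, n_sim=20_000):
    # Shocks arrive as a Poisson process; each shock independently starts a
    # failure that materializes only after an exponential random delay.
    alive = 0
    for _ in range(n_sim):
        n = rng.poisson(rate * t)
        epochs = rng.uniform(0.0, t, n)        # shock times given N(t) = n
        triggers = rng.random(n) < p_trigger   # shocks that initiate failure
        delays = rng.exponential(delay_mean, n)
        alive += not np.any(triggers & (epochs + delays <= t))
    return alive / n_sim

print([round(survival_prob(t), 3) for t in (1.0, 5.0, 10.0)])
```

Conditioning on N(t) = n and placing the shock epochs uniformly on (0, t) is the standard order-statistics property of the Poisson process that makes this simulation valid.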
Yiu, Sean; Farewell, Vernon T; Tom, Brian D M
2017-08-01
Many psoriatic arthritis patients do not progress to permanent joint damage in any of the 28 hand joints, even under prolonged follow-up. This has led several researchers to fit models that estimate the proportion of stayers (those who do not have the propensity to experience the event of interest) and to characterize the rate of developing damaged joints in the movers (those who have the propensity to experience the event of interest). However, when fitted to the same data, the paper demonstrates that the choice of model for the movers can lead to widely varying conclusions on a stayer population, thus implying that, if interest lies in a stayer population, a single analysis should not generally be adopted. The aim of the paper is to provide greater understanding regarding estimation of a stayer population by comparing the inferences, performance and features of multiple fitted models to real and simulated data sets. The models for the movers are based on Poisson processes with patient level random effects and/or dynamic covariates, which are used to induce within-patient correlation, and observation level random effects are used to account for time varying unobserved heterogeneity. The gamma, inverse Gaussian and compound Poisson distributions are considered for the random effects.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
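The "explode" step and the resulting Poisson fit can be sketched as follows for a piecewise-constant hazard with fixed effects only (cut points and data are synthetic; adding a subject- or cluster-level random intercept would turn this into the frailty GLMM the paper studies):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def explode(df, cuts):
    # One row per subject per hazard piece: event indicator d and
    # log-exposure offset. A Poisson GLM on these rows reproduces the
    # piecewise-constant-hazard survival model.
    rows = []
    for _, r in df.iterrows():
        for j, (a, b) in enumerate(zip(cuts[:-1], cuts[1:])):
            if r.time <= a:
                break
            rows.append({"piece": j, "x": r.x,
                         "d": int(r.event and r.time <= b),
                         "logexp": np.log(min(r.time, b) - a)})
    return pd.DataFrame(rows)

rng = np.random.default_rng(7)
n = 400
x = rng.binomial(1, 0.5, n)
t_event = rng.exponential(1.0 / np.exp(0.7 * x))   # true log-hazard ratio 0.7
t_cens = rng.exponential(2.0, n)
df = pd.DataFrame({"time": np.minimum(t_event, t_cens),
                   "event": t_event <= t_cens, "x": x})

long = explode(df, cuts=[0.0, 0.5, 1.0, 2.0, np.inf])
X = sm.add_constant(pd.get_dummies(long.piece, prefix="p", drop_first=True)
                    .astype(float).join(long.x))
fit = sm.GLM(long.d, X, family=sm.families.Poisson(), offset=long.logexp).fit()
print("estimated log-hazard ratio for x:", round(fit.params["x"], 2))
```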
Liao, Jiaqiang; Yu, Shicheng; Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying
2016-01-01
Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period 2008-2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after considering cluster effects, the provincial correlated structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. HFMD incidence was spatially heterogeneous across provinces, and the scale measurement of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-V-shaped and V-shaped relationships with HFMD incidence, respectively. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks. Highly spatially correlated HFMD incidence was detected in the northern, central and southern provinces. Temperature explained most of the variation in HFMD incidence in the southern and northeastern provinces. After adjustment for temperature, the eastern and northern provinces still had highly variable HFMD incidence. We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between HFMD incidence and climatic variables was spatially heterogeneous across provinces. Future research should explore the risk factors that cause the spatially correlated structure or the high variation of HFMD incidence that can be explained by temperature. When analyzing the association between HFMD incidence and climatic variables, spatial heterogeneity among provinces should be evaluated. Moreover, the extra-Poisson multilevel model was capable of modeling the association between overdispersion of HFMD incidence and climatic variables.
Disease Mapping of Zero-excessive Mesothelioma Data in Flanders
Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel
2016-01-01
Purpose To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. Results The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590
Tumas, Natalia; Pou, Sonia Alejandra; Díaz, María Del Pilar
To identify sociodemographic determinants associated with the spatial distribution of breast cancer incidence in the province of Córdoba, Argentina, in order to reveal underlying social inequities. An ecological study was developed in Córdoba (26 counties as geographical units of analysis). The spatial autocorrelation of the crude and standardised incidence rates of breast cancer, and of the sociodemographic indicators of urbanization, fertility and population ageing, was estimated using Moran's index. These variables were entered into a Geographic Information System for mapping. Poisson multilevel regression models were fitted, with the breast cancer incidence rates as the response variable, selected sociodemographic indicators as covariates, and the percentage of households with unmet basic needs as an adjustment variable. In Córdoba, Argentina, a non-random pattern in the spatial distribution of breast cancer incidence rates and of certain sociodemographic indicators was found. The mean increase in annual urban population was inversely associated with breast cancer, whereas the proportion of households with unmet basic needs was directly associated with this cancer. Our results define social inequity scenarios that partially explain the geographical differentials in the breast cancer burden in Córdoba, Argentina. Women residing in socioeconomically disadvantaged households and in less urbanized areas merit special attention in future studies and in breast cancer public health activities. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.
ERIC Educational Resources Information Center
Bockenholt, Ulf
1999-01-01
Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…
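An INAR(1) Poisson process is simple to simulate via binomial thinning, which makes its defining properties easy to check; the parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(8)

def inar1(n, alpha, lam):
    # INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, where ∘ is binomial thinning
    # and eps_t ~ Poisson(lam). Stationary marginal is Poisson(lam/(1-alpha)).
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

x = inar1(100_000, alpha=0.6, lam=2.0)
print("lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1].round(3))
print("mean, var:", x.mean().round(2), x.var().round(2))
```

The lag-1 autocorrelation equals the thinning probability α, which is the time-dependence structure the regression model in the abstract exploits for panel counts.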
Replication of Cancellation Orders Using First-Passage Time Theory in Foreign Currency Market
NASA Astrophysics Data System (ADS)
Boilard, Jean-François; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako
Our research focuses on the annihilation dynamics of limit orders in a spot foreign currency market for various currency pairs. We analyze the cancellation order distribution conditioned on the normalized distance from the mid-price, where the normalized distance is defined as the final distance divided by the initial distance. To reproduce the real data, we introduce two simple models that assume the market price moves randomly and cancellation occurs either after a fixed time t or following a Poisson process. Our models qualitatively reproduce the basic statistical properties of cancellation orders in the data when limit orders are cancelled according to the Poisson process. We briefly discuss the implications of our findings for the construction of more detailed microscopic models.
Seong, Ki Moon; Park, Hweon; Kim, Seong Jung; Ha, Hyo Nam; Lee, Jae Yung; Kim, Joon
2007-06-01
A yeast transcriptional activator, Gcn4p, induces the expression of genes that are involved in amino acid and purine biosynthetic pathways under amino acid starvation. Gcn4p has an acidic activation domain in the central region and a bZIP domain in the C-terminus that is divided into the DNA-binding motif and the dimerization leucine zipper motif. In order to identify amino acids in the DNA-binding motif of Gcn4p which are involved in transcriptional activation, we constructed mutant libraries in the DNA-binding motif through an innovative application of random mutagenesis. A mutant library made from oligonucleotides that were randomly mutated according to the Poisson distribution showed that the actual mutation frequency was in good agreement with expected values. This method can save the time and effort needed to create a mutant library with a predictable mutation frequency. Based on studies using the mutant libraries constructed by the new method, specific residues of the DNA-binding domain in Gcn4p appear to be involved in the transcriptional activities on a conserved binding site.
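The Poisson bookkeeping behind such a library design is just the law of rare events: for a short randomized region, the binomial count of mutations per oligo is well approximated by a Poisson distribution, so mutation frequencies are predictable in advance. A sketch with illustrative numbers (not the paper's):

```python
from scipy.stats import binom, poisson

# Assumed design: a 30-base randomized region with a 5% per-base
# substitution rate during oligo synthesis.
length, rate = 30, 0.05
mu = length * rate   # Poisson approximation to the binomial mutation count

print("k  binomial  poisson")
for k in range(4):
    print(k, round(binom.pmf(k, length, rate), 4),
          round(poisson.pmf(k, mu), 4))
```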
The origin of bursts and heavy tails in human dynamics.
Barabási, Albert-László
2005-05-12
The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behaviour into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. In contrast, there is increasing evidence that the timing of many human activities, ranging from communication to entertainment and work patterns, follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. Here I show that the bursty nature of human behaviour is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed, whereas a few experience very long waiting times. In contrast, random or priority-blind execution is well approximated by uniform inter-event statistics. These findings have important implications, ranging from resource management to service allocation, in both communications and retail.
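The queuing model described here is compact enough to simulate directly; a sketch of the standard two-task version (p close to 1 gives the heavy-tailed, bursty regime, while p close to 0 recovers priority-blind, uniform statistics):

```python
import numpy as np

rng = np.random.default_rng(9)

def priority_queue_waits(n_steps=200_000, L=2, p=0.9999):
    # Keep L tasks with random priorities; with probability p execute the
    # highest-priority task, otherwise a random one; the executed task is
    # replaced by a fresh task. Return the waiting time of each execution.
    prio = rng.random(L)
    age = np.zeros(L, dtype=int)
    waits = np.empty(n_steps, dtype=int)
    for s in range(n_steps):
        i = int(np.argmax(prio)) if rng.random() < p else int(rng.integers(L))
        waits[s] = age[i]
        prio[i] = rng.random()   # fresh task replaces the executed one
        age += 1
        age[i] = 0
    return waits

w = priority_queue_waits()
print("P(wait = 0..7):", np.round(np.bincount(w[w < 8]) / len(w), 4))
```

The slowly decaying histogram, in contrast to the exponential fall-off of a Poissonian queue, is the heavy tail the abstract attributes to priority-based execution.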
NASA Astrophysics Data System (ADS)
Korobov, A.
2011-08-01
Discrete uniform Poisson-Voronoi tessellations of two-dimensional triangular tilings resulting from the Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth of triangular islands have been studied. This shape of tiles and islands, rarely considered in the field of random tessellations, is prompted by the birth-growth process of Ir(210) faceting. The growth mode determines a triangular metric different from the Euclidean metric. Kinetic characteristics of tessellations appear to be metric sensitive, in contrast to area distributions. The latter have been studied for the variant of nuclei growth to the first impingement in addition to the conventional case of complete growth. Kiang conjecture works in both cases. The averaged number of neighbors is six for all studied densities of random tessellations, but neighbors appear to be mainly different in triangular and Euclidean metrics. Also, the applicability of the obtained results for simulating birth-growth processes when the 2D nucleation and impingements are combined with the 3D growth in the particular case of similar shape and the same orientation of growing nuclei is briefly discussed.
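The tessellations in this abstract live on a discrete triangular tiling with a triangular metric; as a Euclidean point of reference, the cell-area statistics of a continuous planar Poisson-Voronoi tessellation can be estimated in a few lines (interior cells only, to avoid boundary clipping):

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(10)
pts = rng.random((2000, 2))              # homogeneous Poisson-like nuclei
vor = Voronoi(pts)

areas = []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:
        continue                          # skip unbounded cells
    verts = vor.vertices[region]
    if (verts < 0).any() or (verts > 1).any():
        continue                          # skip cells clipped by the square
    areas.append(ConvexHull(verts).volume)   # .volume is area in 2D

areas = np.asarray(areas)
print(f"{len(areas)} interior cells, area CV = {areas.std() / areas.mean():.3f}")
```

The coefficient of variation near 0.53 is the classical Euclidean benchmark against which metric-sensitive statistics like those in the abstract can be compared.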
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2016-02-01
In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have inappropriately neglected to note that the approximations by the GEVD and GPD work only asymptotically in most cases. This is particularly the case for the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should not be used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why the GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point for earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.
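To make the convergence issue concrete, here is a hedged numerical illustration with synthetic data and arbitrary Gutenberg-Richter-like parameters: peaks-over-threshold excesses from a truncated exponential are fitted by a GPD with SciPy, and the fitted shape parameter depends strongly on the threshold.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
beta, m0, m1 = np.log(10.0), 4.0, 8.0        # GR-like decay, magnitudes on [m0, m1]
u01 = rng.uniform(size=500_000)
mags = m0 - np.log(1.0 - u01 * (1.0 - np.exp(-beta * (m1 - m0)))) / beta  # inverse CDF

for u in (4.5, 5.5, 6.5, 7.5):
    exc = mags[mags > u] - u                 # POT excesses over threshold u
    c, loc, scale = genpareto.fit(exc, floc=0.0)
    print(f"u={u}: {exc.size:6d} excesses, fitted GPD shape = {c:+.3f}")
# A hard truncation puts the tail in the Weibull domain (limiting shape -1),
# but the fitted shape approaches that limit only slowly as the threshold
# rises, illustrating the poor pre-asymptotic behavior noted above.
```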
On the estimation variance for the specific Euler-Poincaré characteristic of random networks.
Tscheschel, A; Stoyan, D
2003-07-01
The specific Euler number is an important topological characteristic in many applications. It is considered here for the case of random networks, which may appear in microscopy either as primary objects of investigation or as secondary objects describing in an approximate way other structures such as, for example, porous media. For random networks there is a simple and natural estimator of the specific Euler number. For its estimation variance, a simple Poisson approximation is given. It is based on the general exact formula for the estimation variance. The application of the formulas is demonstrated in two examples of quite different nature and topology.
NASA Technical Reports Server (NTRS)
Tilley, David G.
1987-01-01
NASA Space Shuttle Challenger SIR-B ocean scenes are used to derive directional wave spectra for which speckle noise is modeled as a function of Rayleigh random phase coherence downrange and Poisson random amplitude errors inherent in the Doppler measurement of along-track position. A Fourier filter that preserves SIR-B image phase relations is used to correct the stationary and dynamic response characteristics of the remote sensor and scene correlator, as well as to subtract an estimate of the speckle noise component. A two-dimensional map of sea surface elevation is obtained after the filtered image is corrected for both random and deterministic motions.
NASA Astrophysics Data System (ADS)
Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro
2016-04-01
We introduce a new class of stochastic processes in
Generating clustered scale-free networks using Poisson based localization of edges
NASA Astrophysics Data System (ADS)
Türker, İlker
2018-05-01
We introduce a variety of network models using a Poisson-based edge localization strategy, which results in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network through a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, albeit with a limited degree of the scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapt the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we use a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieve clustered scale-free networks in a computational fashion, departing from recent studies by following a simple but efficient approach.
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
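As a minimal sketch of the data-generating mechanism (the smooth intensity below is an arbitrary stand-in for the paper's penalized-spline intensities), one subject's recurrent events can be simulated from a nonhomogeneous Poisson process by Lewis-Shedler thinning and then binned into panel counts:

```python
import numpy as np

rng = np.random.default_rng(42)

def intensity(t):                             # smooth, bounded event rate
    return 2.0 + 1.5 * np.sin(2 * np.pi * t / 10.0)

def nhpp_thinning(T=30.0, lam_max=3.5):       # lam_max must bound intensity()
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate event at rate lam_max
        if t > T:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:  # accept w.p. lambda(t)/lam_max
            events.append(t)

events = nhpp_thinning()
panels = np.histogram(events, bins=np.arange(0.0, 31.0, 5.0))[0]
print("panel counts per 5-unit follow-up window:", panels)
```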
Chavanis, P H; Delfini, L
2014-03-01
We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second-order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010)]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the free energy barrier ΔF divided by the energy of thermal excitation k_B T. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as e^N and is considerable for N ≫ 1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the free energy barrier decreases. In that case, the fluctuations become important and the mean field approximation is no longer valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean field approximation.
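A toy rendering of the Kramers phenomenology invoked above, under stated assumptions (overdamped Langevin dynamics in the double well V(x) = x^4/4 - x^2/2, arbitrary temperature and step size): residence times between wells come out near-exponential, so their mean is close to their standard deviation.

```python
import numpy as np

rng = np.random.default_rng(3)
kT, dt, nsteps = 0.2, 1e-3, 2_000_000
noise = np.sqrt(2 * kT * dt) * rng.standard_normal(nsteps)
x, well, t_enter, res_times = -1.0, -1, 0.0, []
for i in range(nsteps):
    x += (x - x**3) * dt + noise[i]      # drift = -V'(x) for V = x^4/4 - x^2/2
    if well < 0 and x > 1.0:             # arrived in the right well
        res_times.append(i * dt - t_enter); well, t_enter = 1, i * dt
    elif well > 0 and x < -1.0:          # back in the left well
        res_times.append(i * dt - t_enter); well, t_enter = -1, i * dt

res = np.array(res_times)
# Poissonian transitions: residence times are exponential, so mean ~ std.
print(f"{res.size} transitions; mean {res.mean():.1f}, std {res.std():.1f}")
```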
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
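A schematic of the CGP idea, not the authors' solver: the Poisson problem is solved on a coarsened grid and interpolated back to the fine grid, trading a small accuracy loss for a much cheaper elliptic solve. The grid sizes and the dense solver below are illustrative simplifications.

```python
import numpy as np
from scipy.ndimage import zoom

def poisson_solve(f, h):
    """Dense 5-point solve of the Poisson equation on an n x n interior grid (sketch only)."""
    n = f.shape[0]
    T = np.diag([-2.0] * n) + np.diag([1.0] * (n - 1), 1) + np.diag([1.0] * (n - 1), -1)
    A = (np.kron(np.eye(n), T) + np.kron(T, np.eye(n))) / h**2
    return np.linalg.solve(A, f.ravel()).reshape(n, n)

n = 31                                     # fine interior resolution (coarse: 15)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)
f = np.sin(np.pi * X) * np.sin(np.pi * Y)  # source term

u_fine = poisson_solve(f, h)               # full-resolution reference solve
u_cgp = zoom(poisson_solve(f[1::2, 1::2], 2 * h), n / 15, order=1)  # coarse + interpolate
print("relative CGP error:", np.linalg.norm(u_cgp - u_fine) / np.linalg.norm(u_fine))
```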
Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan
2018-03-01
Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R_indiv^2 or Kendall's τ at the individual level, and the R_trial^2 at the trial level. We aimed at providing an R implementation of classical and well-established, as well as more recent, statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and the data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, Kendall's τ is estimated as the measure of individual-level surrogacy using a copula model. Then, the R_trial^2 is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure Kendall's τ and treatment-by-trial interactions to measure the R_trial^2. The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows optionally adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
Implementation of a quantum random number generator based on the optimal clustering of photocounts
NASA Astrophysics Data System (ADS)
Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.
2017-10-01
To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable measurement process for the system from which the initial random sequence is generated. This ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented using the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events, together with a method of extracting a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach a yield rate of 64 Mbit/s for the output random sequence.
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
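As a hedged sketch of the specification that performed best for goodness of fit, here is a GLM with square-root link and Poisson variance function fitted to synthetic skewed "cost" data (the statsmodels link-class name varies slightly across versions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5_000
age = rng.uniform(20, 90, n)                 # a single mock covariate
X = sm.add_constant(age)
# Skewed outcome whose mean satisfies sqrt(mu) = 5 + 0.1 * age.
cost = rng.gamma(shape=2.0, scale=(5 + 0.1 * age) ** 2 / 2.0)

# Poisson variance function + sqrt link (quasi-likelihood for continuous costs).
model = sm.GLM(cost, X, family=sm.families.Poisson(link=sm.families.links.Sqrt()))
res = model.fit(scale="X2")                  # estimate the dispersion, Pearson X2
print(res.params, res.scale)                 # params should be near (5, 0.1)
```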
Generalized species sampling priors with latent Beta reinforcements
Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele
2014-01-01
Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet Process and the two parameters Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, differently from existing work. We then propose the use of such process as prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet Processes mixtures and Hidden Markov Models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors
NASA Astrophysics Data System (ADS)
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b^2/N = α^2/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
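A hedged numerical check of the reported Poisson-to-GOE transition: the mean consecutive-spacing ratio <r> (roughly 0.39 for Poisson, roughly 0.53 for GOE) is computed for symmetric random matrices truncated to bandwidth b, which avoids spectral unfolding. Sizes and bandwidths are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spacing_ratio(N=400, b=1, samples=20):
    vals = []
    for _ in range(samples):
        A = rng.standard_normal((N, N))
        H = (A + A.T) / 2.0                     # symmetric Gaussian matrix
        i, j = np.indices((N, N))
        H[np.abs(i - j) > b] = 0.0              # keep only a band of width b
        s = np.diff(np.sort(np.linalg.eigvalsh(H)))
        r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
        q = len(r) // 4
        vals.append(r[q:-q].mean())             # use the spectral bulk only
    return np.mean(vals)

for b in (1, 4, 16, 64):                        # effective range d = b^2/N grows
    print(f"b={b:3d}: <r> = {mean_spacing_ratio(b=b):.3f}")
```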
NASA Astrophysics Data System (ADS)
Hincks, Ian; Granade, Christopher; Cory, David G.
2018-01-01
The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
A cross-comparison of different techniques for modeling macro-level cyclist crashes.
Guo, Yanyong; Osama, Ahmed; Sayed, Tarek
2018-04-01
Despite the recognized benefits of cycling as a sustainable mode of transportation, cyclists are considered vulnerable road users and there are concerns about their safety. Therefore, it is essential to investigate the factors affecting cyclist safety. The goal of this study is to evaluate and compare different approaches of modeling macro-level cyclist safety as well as investigating factors that contribute to cyclist crashes using a comprehensive list of covariates. Data from 134 traffic analysis zones (TAZs) in the City of Vancouver were used to develop macro-level crash models (CM) incorporating variables related to actual traffic exposure, socio-economics, land use, built environment, and bike network. Four types of CMs were developed under a full Bayesian framework: Poisson lognormal model (PLN), random intercepts PLN model (RIPLN), random parameters PLN model (RPPLN), and spatial PLN model (SPLN). The SPLN model had the best goodness of fit, and the results highlighted the significant effects of spatial correlation. The models showed that the cyclist crashes were positively associated with bike and vehicle exposure measures, households, commercial area density, and signal density. On the other hand, negative associations were found between cyclist crashes and some bike network indicators such as average edge length, average zonal slope, and off-street bike links. Copyright © 2018 Elsevier Ltd. All rights reserved.
Filtrations on Springer fiber cohomology and Kostka polynomials
NASA Astrophysics Data System (ADS)
Bellamy, Gwyn; Schedler, Travis
2018-03-01
We prove a conjecture which expresses the bigraded Poisson-de Rham homology of the nilpotent cone of a semisimple Lie algebra in terms of the generalized (one-variable) Kostka polynomials, via a formula suggested by Lusztig. This allows us to construct a canonical family of filtrations on the flag variety cohomology, and hence on irreducible representations of the Weyl group, whose Hilbert series are given by the generalized Kostka polynomials. We deduce consequences for the cohomology of all Springer fibers. In particular, this computes the grading on the zeroth Poisson homology of all classical finite W-algebras, as well as the filtration on the zeroth Hochschild homology of all quantum finite W-algebras, and we generalize to all homology degrees. As a consequence, we deduce a conjecture of Proudfoot on symplectic duality, relating in type A the Poisson homology of Slodowy slices to the intersection cohomology of nilpotent orbit closures. In the last section, we give an analogue of our main theorem in the setting of mirabolic D-modules.
Weber's law implies neural discharge more regular than a Poisson process.
Kang, Jing; Wu, Jianhua; Smerieri, Anteo; Feng, Jianfeng
2010-03-01
Weber's law is one of the basic laws in psychophysics, but the link between this psychophysical behavior and the neuronal response has not yet been established. In this paper, we carried out an analysis of the spike train statistics when Weber's law holds, and found that the efferent spike train of a single neuron is less variable than a Poisson process. For population neurons, Weber's law is satisfied only when the population size is small (< 10 neurons). However, if the population neurons share a weak correlation in their discharges and the individual neuronal spike train is more regular than a Poisson process, Weber's law holds without any restriction on the population size. A biased-competition attractor network also demonstrates that the coefficient of variation of the interspike interval in the winning pool should be less than one for the validity of Weber's law. Our work links Weber's law with neural firing properties quantitatively, shedding light on the relation between psychophysical behavior and neuronal responses.
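The regularity statistic at issue can be illustrated with a short sketch: the coefficient of variation (CV) of interspike intervals is 1 for a Poisson process, while a gamma renewal process with shape k > 1 is more regular (CV = 1/√k < 1), the regime the paper associates with Weber's law. The firing rate below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 20.0                                  # spikes per second

for k in (1.0, 2.0, 4.0):                    # k = 1 recovers the Poisson process
    isi = rng.gamma(shape=k, scale=1.0 / (k * rate), size=100_000)
    cv = isi.std() / isi.mean()
    print(f"gamma shape {k:.0f}: CV of ISI = {cv:.3f} (theory {1 / np.sqrt(k):.3f})")
```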
Factor-Analysis Methods for Higher-Performance Neural Prostheses
Santhanam, Gopal; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Afshar, Afsheen; Sahani, Maneesh; Shenoy, Krishna V.
2009-01-01
Neural prostheses aim to provide treatment options for individuals with nervous-system disease or injury. It is necessary, however, to increase the performance of such systems before they can be clinically viable for patients with motor dysfunction. One performance limitation is the presence of correlated trial-to-trial variability that can cause neural responses to wax and wane in concert as the subject is, for example, more attentive or more fatigued. If a system does not properly account for this variability, it may mistakenly interpret such variability as an entirely different intention by the subject. We report here the design and characterization of factor-analysis (FA)–based decoding algorithms that can contend with this confound. We characterize the decoders (classifiers) on experimental data where monkeys performed both a real reach task and a prosthetic cursor task while we recorded from 96 electrodes implanted in dorsal premotor cortex. The decoder attempts to infer the underlying factors that comodulate the neurons' responses and can use this information to substantially lower error rates (one of eight reach endpoint predictions) by ≲75% (e.g., ∼20% total prediction error using traditional independent Poisson models reduced to ∼5%). We also examine additional key aspects of these new algorithms: the effect of neural integration window length on performance, an extension of the algorithms to use Poisson statistics, and the effect of training set size on the decoding accuracy of test data. We found that FA-based methods are most effective for integration windows >150 ms, although still advantageous at shorter timescales, that Gaussian-based algorithms performed better than the analogous Poisson-based algorithms and that the FA algorithm is robust even with a limited amount of training data. We propose that FA-based methods are effective in modeling correlated trial-to-trial neural variability and can be used to substantially increase overall prosthetic system performance. PMID:19297518
Travelling Randomly on the Poincaré Half-Plane with a Pythagorean Compass
NASA Astrophysics Data System (ADS)
Cammarota, V.; Orsingher, E.
2008-02-01
A random motion on the Poincaré half-plane is studied. A particle runs on the geodesic lines changing direction at Poisson-paced times. The hyperbolic distance is analyzed, also in the case where returns to the starting point are admitted. The main results concern the mean hyperbolic distance (and also the conditional mean distance) in all versions of the motion envisaged. Also an analogous motion on orthogonal circles of the sphere is examined and the evolution of the mean distance from the starting point is investigated.
Morphology and linear-elastic moduli of random network solids.
Nachtrab, Susan; Kapfer, Sebastian C; Arns, Christoph H; Madadi, Mahyar; Mecke, Klaus; Schröder-Turk, Gerd E
2011-06-17
The effective linear-elastic moduli of disordered network solids are analyzed by voxel-based finite element calculations. We analyze network solids given by Poisson-Voronoi processes and by the structure of collagen fiber networks imaged by confocal microscopy. The solid volume fraction ϕ is varied by adjusting the fiber radius, while keeping the structural mesh or pore size of the underlying network fixed. For intermediate ϕ, the bulk and shear modulus are approximated by empirical power-laws K(ϕ) ∝ ϕ^n and G(ϕ) ∝ ϕ^m with n ≈ 1.4 and m ≈ 1.7. The exponents for the collagen and the Poisson-Voronoi network solids are similar, and are close to the values n = 1.22 and m = 2.11 found in a previous voxel-based finite element study of Poisson-Voronoi systems with different boundary conditions. However, the exponents of these empirical power-laws are at odds with the analytic values of n = 1 and m = 2, valid for low-density cellular structures in the limit of thin beams. We propose a functional form for K(ϕ) that models the cross-over from a power-law at low densities to a porous solid at high densities; a fit of the data to this functional form yields the asymptotic exponent n ≈ 1.00, as expected. Further, both the intensity of the Poisson-Voronoi process and the collagen concentration in the samples, both of which alter the typical pore or mesh size, affect the effective moduli only by the resulting change of the solid volume fraction. These findings suggest that a network solid with the structure of the collagen networks can be modeled in quantitative agreement by a Poisson-Voronoi process. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
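A minimal sketch of how such exponents are typically extracted, using synthetic stand-in data rather than the paper's measurements: a least-squares slope in log-log coordinates.

```python
import numpy as np

rng = np.random.default_rng(5)
phi = np.array([0.05, 0.08, 0.12, 0.18, 0.27, 0.40])           # solid volume fraction
K = 3.1 * phi**1.4 * rng.lognormal(sigma=0.05, size=phi.size)  # mock bulk moduli

# Fit K = C * phi^n, i.e. log K = n * log phi + log C.
n, logC = np.polyfit(np.log(phi), np.log(K), 1)
print(f"fitted exponent n = {n:.2f}, prefactor C = {np.exp(logC):.2f}")
```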
Acute childhood asthma in Galway city from 1985-2005: relationship to air pollution and climate.
Loftus, A; Loftus, B G; Muircheartaigh, I O; Newell, J; Scarrott, C; Jennings, S
2014-01-01
We examine the relationship of air pollution and climatic variables to asthma admission rates of children in Galway city over a 21 year period. Paediatric asthma admissions were recorded from 1985-2005, and admission rates per thousand calculated for pre-school (1-4 years), school aged (5-14 years) and all children (1-14 years) on a monthly and annual basis. These data were compared to average monthly and annual climatic variables (rainfall, humidity, sunshine, wind speed and temperature) and black smoke levels for the city. Simple correlation and Poisson Generalized Additive Models (GAM) were used. Admission rates each month are significantly correlated with smoke levels (p = 0.007). Poisson GAM also shows a relationship between admissions and pollution (p = 0.07). Annual smoke levels impact more on admission rates of preschoolers (p = 0.04) than school age children (p = 0.10). These data suggest that air pollution is an important factor in the epidemiology of acute childhood asthma.
SIERRA - A 3-D device simulator for reliability modeling
NASA Astrophysics Data System (ADS)
Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.
1989-05-01
SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver based on an incomplete LU (ILU) preconditioned conjugate gradient squared (CGS/BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
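A present-day sketch of the same solver strategy using SciPy (standing in for SIERRA's in-house implementation): an incomplete-LU preconditioned CGS iteration applied to a 2D Poisson matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                                          # grid points per side
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T).tocsc()                    # 2D 5-point Laplacian
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factors
M = spla.LinearOperator(A.shape, ilu.solve)          # preconditioner M ~ A^{-1}
x, info = spla.cgs(A, b, M=M)                        # conjugate gradient squared
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```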
NASA Technical Reports Server (NTRS)
Fujii, K.
1983-01-01
A method for generating three-dimensional finite-difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, this indicates that the method is a useful tool for generating three-dimensional grids about complicated geometries.
The Chaotic Light Curves of Accreting Black Holes
NASA Technical Reports Server (NTRS)
Kazanas, Demosthenes
2007-01-01
We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals and the stochastic nature of the Comptonization process in converting these soft photons to the observed high energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding Power Spectral Densities (PSD), autocorrelation functions, time skewness of the light curves and time lags between the light curves of the sources at different photon energies and compare our results to observation. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifested by the autocorrelation and time skewness) and also to their PSDs and time lags, by producing most of the variability power at time scales ≳ a few seconds, while at the same time allowing for shots of a few msec in duration, in accordance with observation. We suggest that refinement of this type of model along with spectral and phase lag information can be used to probe the structure of this class of high energy sources.
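A toy version of the model's first ingredient, under stated assumptions (exponential shot profile, arbitrary rate and decay time): soft-photon injections at Poisson-paced times produce a light curve whose PSD is flat at low frequencies and rolls over near 1/(2πτ).

```python
import numpy as np

rng = np.random.default_rng(11)
dt, T, rate, tau = 0.01, 200.0, 2.0, 1.5       # bin, span, shots/s, decay time
t = np.arange(0.0, T, dt)
lc = np.zeros_like(t)
for t0 in rng.uniform(0.0, T, rng.poisson(rate * T)):  # Poisson-paced injections
    m = t >= t0
    lc[m] += np.exp(-(t[m] - t0) / tau)        # exponentially decaying shot

freq = np.fft.rfftfreq(t.size, dt)[1:]
psd = np.abs(np.fft.rfft(lc - lc.mean()))[1:] ** 2
# Flat at low f, suppressed above the roll-over frequency ~ 1/(2*pi*tau).
print("low-f / high-f PSD ratio:", psd[:20].mean() / psd[-2000:].mean())
```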
Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-14
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
Ortega Hinojosa, Alberto M; Davies, Molly M; Jarjour, Sarah; Burnett, Richard T; Mann, Jennifer K; Hughes, Edward; Balmes, John R; Turner, Michelle C; Jerrett, Michael
2014-10-01
Globally and in the United States, smoking and obesity are leading causes of death and disability. Reliable estimates of prevalence for these risk factors are often missing variables in public health surveillance programs. This may limit the capacity of public health surveillance to target interventions or to assess associations between other environmental risk factors (e.g., air pollution) and health, because smoking and obesity are often important confounders. Our objective was to generate prevalence estimates of smoking and obesity over small areas of the United States (i.e., at the ZIP code and census tract levels). We predicted smoking and obesity prevalence using a combined approach: first a lasso-based variable selection procedure, followed by a two-level random-effects regression with a Poisson link clustered on state and county. We used data from the Behavioral Risk Factor Surveillance System (BRFSS) from 1991 to 2010 to estimate the model. We used 10-fold cross-validated mean squared errors and the variance of the residuals to test our model. To downscale the estimates we combined the prediction equations with 1990 and 2000 U.S. Census data for each of the four five-year time periods in this time range at the ZIP code and census tract levels. Several sensitivity analyses were conducted using models that included only basic terms, that accounted for spatial autocorrelation, and that used generalized linear models without random effects. The two-level random-effects model produced improved estimates compared to the fixed-effects-only models. Estimates were particularly improved for the two-thirds of the conterminous U.S. where BRFSS data were available to estimate the county-level random effects. We downscaled the smoking and obesity rate predictions to derive ZIP code and census tract estimates. To our knowledge these smoking and obesity predictions are the first to be developed for the entire conterminous U.S. for census tracts and ZIP codes. Our estimates could have significant utility for public health surveillance. Copyright © 2014. Published by Elsevier Inc.
Saez, M; Figueiras, A; Ballester, F; Perez-Hoyos, S; Ocana, R; Tobias, A
2001-01-01
STUDY OBJECTIVE—The objective of this paper is to introduce a different approach, called ecological-longitudinal, to carrying out pooled analysis in time series ecological studies. Because it gives a larger number of data points and, hence, increases the statistical power of the analysis, this approach, unlike conventional ones, allows the accommodation of random effects models, lags, and interactions between pollutants and between pollutants and meteorological variables, which are hard to implement in conventional approaches. DESIGN—The approach is illustrated by providing quantitative estimates of the short-term effects of air pollution on mortality in three Spanish cities, Barcelona, Valencia and Vigo, for the period 1992-1994. Because the dependent variable was a count, a Poisson generalised linear model was first specified. Several modelling issues are worth mentioning. Firstly, because the relations between mortality and explanatory variables were non-linear, cubic splines were used for covariate control, leading to a generalised additive model, GAM. Secondly, the effects of the predictors on the response were allowed to occur with some lag. Thirdly, the residual autocorrelation, arising from imperfect control, was controlled for by means of an autoregressive Poisson GAM. Finally, the longitudinal design demanded consideration of individual heterogeneity, requiring the use of mixed models. MAIN RESULTS—The estimates of the relative risks obtained from the individual analyses varied across cities, particularly those associated with sulphur dioxide. The highest relative risks corresponded to black smoke in Valencia. These estimates were higher than those obtained from the ecological-longitudinal analysis. Relative risks estimated from this latter analysis were practically identical across cities, 1.00638 (95% confidence intervals 1.0002, 1.0011) for a black smoke increase of 10 µg/m3 and 1.00415 (95% CI 1.0001, 1.0007) for an increase of 10 µg/m3 of sulphur dioxide. Because the statistical power is higher than in the individual analysis, more interactions were statistically significant, especially those among air pollutants and meteorological variables. CONCLUSIONS—Air pollutant levels were related to mortality in the three cities of the study, Barcelona, Valencia and Vigo. These results were consistent with similar studies in other cities and with other multicentric studies, and coherent with both previous individual studies for each city and multicentric studies for all three cities. Keywords: air pollution; mortality; longitudinal studies PMID:11351001
Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson
2008-01-01
We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
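A small illustration of the zero-inflation problem these models address (parameters arbitrary): counts from a zero-inflated Poisson exhibit far more zeros than a Poisson with the same mean predicts.

```python
import numpy as np

rng = np.random.default_rng(2)
n, pi_zero, lam = 50_000, 0.3, 2.5            # 30% structural zeros, rate 2.5
y = rng.poisson(lam, n) * (rng.uniform(size=n) >= pi_zero)

mean = y.mean()
print(f"observed zero fraction    : {np.mean(y == 0):.3f}")
print(f"Poisson({mean:.2f}) predicts : {np.exp(-mean):.3f}")  # far fewer zeros
```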
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298
Wilkes, E J A; Cowling, A; Woodgate, R G; Hughes, K J
2016-10-15
Faecal egg counts (FEC) are used widely for monitoring of parasite infection in animals, treatment decision-making and estimation of anthelmintic efficacy. When a single count or sample mean is used as a point estimate of the expectation of the egg distribution over some time interval, the variability in the egg density is not accounted for. Although variability, including quantifying sources, of egg count data has been described, the spatiotemporal distribution of nematode eggs in faeces is not well understood. We believe that statistical inference about the mean egg count for treatment decision-making has not been used previously. The aim of this study was to examine the density of Parascaris eggs in solution and faeces and to describe the use of hypothesis testing for decision-making. Faeces from two foals with Parascaris burdens were mixed with magnesium sulphate solution and 30 McMaster chambers were examined to determine the egg distribution in a well-mixed solution. To examine the distribution of eggs in faeces from an individual animal, three faecal piles from a foal with a known Parascaris burden were obtained, from which 81 counts were performed. A single faecal sample was also collected daily from 20 foals on three consecutive days and a FEC was performed on three separate portions of each sample. As appropriate, Poisson or negative binomial confidence intervals for the distribution mean were calculated. Parascaris eggs in a well-mixed solution conformed to a homogeneous Poisson process, while the egg density in faeces was not homogeneous, but aggregated. This study provides an extension from homogeneous to inhomogeneous Poisson processes, leading to an understanding of why Poisson and negative binomial distributions correspondingly provide a good fit for egg count data. The application of one-sided hypothesis tests for decision-making is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
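A sketch of the proposed inferential step, with invented counts and threshold: treat a single McMaster-style count as Poisson, attach an exact (Garwood) confidence interval, and run a one-sided test of whether the mean exceeds a treatment cutoff.

```python
from scipy.stats import chi2, poisson

count, threshold = 7, 4                      # observed eggs; decision cutoff

# Exact (Garwood) 95% CI for a Poisson mean from a single count.
lo = chi2.ppf(0.025, 2 * count) / 2 if count > 0 else 0.0
hi = chi2.ppf(0.975, 2 * (count + 1)) / 2
print(f"95% CI for the mean count: ({lo:.2f}, {hi:.2f})")

# One-sided test of H0: mean <= threshold vs H1: mean > threshold.
p_value = poisson.sf(count - 1, threshold)   # P(X >= count | mean = threshold)
print(f"one-sided p-value: {p_value:.3f}")
```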
Yu, Pei; Li, Zi-Yuan; Xu, Hong-Ya; Huang, Liang; Dietz, Barbara; Grebogi, Celso; Lai, Ying-Cheng
2016-12-01
A crucial result in quantum chaos, which has been established for a long time, is that the spectral properties of classically integrable systems generically are described by Poisson statistics, whereas those of time-reversal symmetric, classically chaotic systems coincide with those of random matrices from the Gaussian orthogonal ensemble (GOE). Does this result hold for two-dimensional Dirac material systems? To address this fundamental question, we investigate the spectral properties in a representative class of graphene billiards with shapes of classically integrable circular-sector billiards. Naively one may expect to observe Poisson statistics, which is indeed true for energies close to the band edges where the quasiparticle obeys the Schrödinger equation. However, for energies near the Dirac point, where the quasiparticles behave like massless Dirac fermions, Poisson statistics is extremely rare in the sense that it emerges only under quite strict symmetry constraints on the straight boundary parts of the sector. An arbitrarily small amount of imperfection of the boundary results in GOE statistics. This implies that, for circular-sector confinements with arbitrary angle, the spectral properties will generically be GOE. These results are corroborated by extensive numerical computation. Furthermore, we provide a physical understanding for our results.
Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Fock space representation for the quantum Lorentz gas
NASA Astrophysics Data System (ADS)
Maassen, H.; Tip, A.
1995-02-01
A Fock space representation is given for the quantum Lorentz gas, i.e., for random Schrödinger operators of the form H(ω) = p² + V_ω = p² + Σ_j φ(x − x_j(ω)), acting in H = L²(R^d), with Poisson-distributed x_j's. An operator H is defined in K = H ⊗ P = H ⊗ L²(Ω, P(dω)) = L²(Ω, P(dω); H) by the action of H(ω) on its fibers in a direct integral decomposition. The stationarity of the Poisson process allows a unitarily equivalent description in terms of a new family {H(k) | k ∈ R^d}, where each H(k) acts in P [A. Tip, J. Math. Phys. 35, 113 (1994)]. The space P is then unitarily mapped onto the symmetric Fock space over L²(R^d, ρ dx), with ρ the intensity of the Poisson process (the average number of points x_j per unit volume; the scatterer density), and the equivalent of H(k) is determined. Averages now become vacuum expectation values, and a further unitary transformation (removing ρ in ρ dx) is made which leaves the former invariant. The resulting operator H_F(k) has an interesting structure: on the nth Fock layer we encounter a single particle moving in the field of n scatterers, and the randomness now appears in the coefficient √ρ of a coupling term connecting neighboring Fock layers. We also give a simple direct self-adjointness proof for H_F(k), based upon Nelson's commutator theorem. Restriction to a finite number of layers (a kind of low-scatterer-density approximation) still gives nontrivial results, as is demonstrated by considering an example.
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (r_s). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with r_s = 0.68 on leave-one-out testing, compared to logistic regression (r_s = 0.4), Poisson-based TCP (r_s = 0.33), and the cell kill equivalent uniform dose model (r_s = 0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
Modeling sediment transport as a spatio-temporal Markov process.
NASA Astrophysics Data System (ADS)
Heyman, Joris; Ancey, Christophe
2014-05-01
Despite a century of research on bedload sediment transport in rivers, its constitutive laws remain largely unknown, as evidenced by our almost null ability to predict mid- to long-term transported volumes within reasonable confidence intervals. The intrinsically fluctuating nature of bedload transport may be one of the most important reasons why classical approaches fail. A microscopic probabilistic framework has the advantage of taking these fluctuations into account at the particle scale, in order to understand their effect on macroscopic variables such as the sediment flux. In this framework, bedload transport is seen as the random motion of particles (sand, gravel, pebbles...) over a two-dimensional surface (the river bed). The number of particles in motion, as well as their velocities, are random variables. In this talk, we show how a simple birth-death Markov model governing particle motion on a regular lattice accurately reproduces the spatio-temporal correlations observed at the macroscopic level. Entrainment, deposition and transport of particles by the turbulent fluid (air or water) are assumed to be independent and memoryless processes that modify the number of particles in motion. By means of the Poisson representation, we obtain a Fokker-Planck equation that is exactly equivalent to the master equation and thus valid for all cell sizes. The analysis shows that the number of moving particles evolves locally far from thermodynamic equilibrium. Several analytical results are presented and compared to experimental data. The index of dispersion (or variance-over-mean ratio) is shown to grow from unity at small scales to larger values at larger scales, confirming the non-Poissonian behavior of bedload transport. We also study the one- and two-dimensional K-function, which gives the average number of moving particles located in a ball centered on a particle centroid, as a function of the ball's radius.
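As a toy illustration of the overdispersion described in the abstract above, the following sketch (the parameter values and the collective-entrainment birth rate are illustrative assumptions, not taken from the talk) simulates a birth-death process for the number of moving particles with a Gillespie algorithm and checks that the index of dispersion exceeds the Poisson value of one:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(lam, mu, sigma, t_max):
    """Number N of moving particles: births at rate lam + mu*N (immigration
    plus collective entrainment), deaths (deposition) at rate sigma*N."""
    t, n, times, states = 0.0, 0, [0.0], [0]
    while t < t_max:
        birth, death = lam + mu * n, sigma * n
        total = birth + death
        t += rng.exponential(1.0 / total)
        n += 1 if rng.random() < birth / total else -1
        times.append(t)
        states.append(n)
    return np.array(times), np.array(states)

t, n = gillespie_birth_death(lam=5.0, mu=0.8, sigma=1.0, t_max=5_000.0)
grid = np.arange(500.0, 5_000.0, 1.0)               # sample after a transient
samples = n[np.searchsorted(t, grid, side="right") - 1]
print("index of dispersion:", samples.var() / samples.mean())
```

With these rates the stationary distribution is negative binomial, so the printed index of dispersion should approach 1/(1 - mu/sigma) = 5 rather than the Poisson value of 1.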
NASA Astrophysics Data System (ADS)
Zaitsev, Vladimir Y.; Radostin, Andrey V.; Dyskin, Arcady V.; Pasternak, Elena
2017-04-01
We report results of an analysis of literature data on P- and S-wave velocities of rocks subjected to variable hydrostatic pressure. Of about 90 examined samples, in more than 40% the reconstructed Poisson's ratios are negative at the lowest confining pressure, with a gradual transition to conventional positive values at higher pressure. The portion of rocks exhibiting negative Poisson's ratio appeared to be unexpectedly high. To understand the mechanism of negative Poisson's ratio, the pressure dependences of P- and S-wave velocities were analyzed using an effective medium model in which the reduction in the elastic moduli due to cracks is described in terms of compliances with respect to shear and normal loading that are imparted to the rock by the presence of cracks. This is in contrast to widely used descriptions of effective cracked media based on a specific crack model (e.g., the penny-shape crack), in which the ratio between the normal and shear compliances of a crack is strictly predetermined. The analysis of the pressure dependences of the elastic wave velocities makes it possible to reveal the ratio between pure normal and shear compliances (called the q-ratio below) for real defects and to quantify their integral content in the rock. The examination performed demonstrates that a significant portion (over 50%) of cracks exhibit a q-ratio several times higher than that assumed for conventional penny-shape cracks. This leads to a faster reduction of Poisson's ratio with increasing crack concentration. Samples with negative Poisson's ratio are characterized by elevated q-ratios and, simultaneously, elevated crack concentrations. Our results clearly indicate that the traditional crack model is not adequate for a significant portion of rocks, and that the interaction between opposite crack faces, leading to domination of the normal compliance and reduced shear displacement discontinuity, can play an important role in the mechanical behavior of rocks.
Grabska-Barwińska, Agnieszka; Latham, Peter E
2014-06-01
We use mean field techniques to compute the distribution of excitatory and inhibitory firing rates in large networks of randomly connected spiking quadratic integrate and fire neurons. These techniques are based on the assumption that activity is asynchronous and Poisson. For most parameter settings these assumptions are strongly violated; nevertheless, so long as the networks are not too synchronous, we find good agreement between mean field prediction and network simulations. Thus, much of the intuition developed for randomly connected networks in the asynchronous regime applies to mildly synchronous networks.
A heuristic for the distribution of point counts for random curves over a finite field.
Achter, Jeffrey D; Erman, Daniel; Kedlaya, Kiran S; Wood, Melanie Matchett; Zureick-Brown, David
2015-04-28
How many rational points are there on a random algebraic curve of large genus g over a given finite field Fq? We propose a heuristic for this question motivated by a (now proven) conjecture of Mumford on the cohomology of moduli spaces of curves; this heuristic suggests a Poisson distribution with mean q+1+1/(q-1). We prove a weaker version of this statement in which g and q tend to infinity, with q much larger than g. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
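The heuristic is concrete enough to compute with. A minimal sketch (assuming nothing beyond the stated mean) tabulates the proposed Poisson law for a few small fields:

```python
from math import exp, factorial

def point_count_pmf(n, q):
    """Heuristic Poisson law for the number of rational points on a random
    curve of large genus over F_q, with mean q + 1 + 1/(q - 1)."""
    lam = q + 1 + 1.0 / (q - 1)
    return exp(-lam) * lam**n / factorial(n)

for q in (2, 3, 5):
    lam = q + 1 + 1.0 / (q - 1)
    print(f"q={q}: mean {lam:.3f}, P(no rational points) = {point_count_pmf(0, q):.4f}")
```

For q = 2, for instance, the mean is 4 and the heuristic chance of a pointless curve is e^(-4), roughly 1.8%.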
Partial transpose of random quantum states: Exact formulas and meanders
NASA Astrophysics Data System (ADS)
Fukuda, Motohisa; Śniady, Piotr
2013-04-01
We investigate the asymptotic behavior of the empirical eigenvalue distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by a Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how the dimensions of the spaces concerned grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third turns out to be a new matrix model for the meander polynomials. Moreover, we prove convergence to the semicircular distribution, together with its extreme eigenvalues, under weaker assumptions, and show a large deviation bound for the latter.
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature deals only with a simplified channel model that considers the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
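The GAT step can be sketched compactly. Assuming unit gain, the standard transform 2*sqrt(z + 3/8 + sigma^2) approximately stabilizes the variance of Poisson-plus-Gaussian data to 1; the numbers below are illustrative and are not the paper's system parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def gat(z, sigma):
    """Generalized Anscombe transform for unit-gain Poisson + Gaussian data:
    maps z ~ Poisson(lam) + N(0, sigma^2) to roughly N(2*sqrt(lam), 1)."""
    return 2.0 * np.sqrt(np.maximum(z + 3.0 / 8.0 + sigma**2, 0.0))

sigma = 2.0
for lam in (5.0, 20.0, 80.0):
    z = rng.poisson(lam, 200_000) + rng.normal(0.0, sigma, 200_000)
    print(f"lam={lam:5.1f}: raw var {z.var():7.2f} -> GAT var {gat(z, sigma).var():.3f}")
```

The raw variance grows with the signal level (lam + sigma^2), while the transformed variance stays near 1, which is what makes a simple AWGN hard-decision detector applicable after the transform.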
Stochastic modeling of soil salinity
NASA Astrophysics Data System (ADS)
Suweis, S.; Porporato, A. M.; Daly, E.; van der Zee, S.; Maritan, A.; Rinaldo, A.
2010-12-01
A minimalist stochastic model of primary soil salinity is proposed, in which the rate of soil salinization is determined by the balance between dry and wet salt deposition and intermittent leaching caused by rainfall events. The equations for the probability density functions of salt mass and concentration are found by reducing the coupled soil moisture and salt mass balance equations to a single stochastic differential equation (a generalized Langevin equation) driven by multiplicative Poisson noise. Generalized Langevin equations with multiplicative white Poisson noise pose the usual Ito (I) versus Stratonovich (S) prescription dilemma. Different interpretations lead to different results, so choosing between the I and S prescriptions is crucial for correctly describing the dynamics of the modeled systems. We show how this choice can be determined by physical information about the timescales involved in the process. We also show that when the multiplicative noise is at most linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We then apply these results to the generalized Langevin equation that drives the salt mass dynamics. The stationary analytical solutions for the probability density functions of salt mass and concentration provide insight into the interplay of the main soil, plant and climate parameters responsible for long-term soil salinization. In particular, they show the existence of two distinct regimes, one where the mean salt mass remains nearly constant (or decreases) with increasing rainfall frequency, and another where the mean salt content increases markedly with increasing rainfall frequency. As a result, relatively small reductions of rainfall in drier climates may entail dramatic shifts in long-term soil salinization trends, with significant consequences, e.g., for climate change impacts on rain-fed agriculture.
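A crude caricature of such a Langevin equation can be simulated directly. In the sketch below (all parameter values are invented for illustration and do not come from the paper), salt mass grows by steady deposition and suffers multiplicative downward jumps at Poisson-distributed rainfall times; applying each jump to the current, post-deposition mass amounts to committing to one of the two prescriptions discussed above:

```python
import numpy as np

rng = np.random.default_rng(2)

# Salt mass m(t): steady dry/wet deposition at rate `dep`, and at rainfall
# times (Poisson process of rate `rate`) a random fraction of the mass is
# leached, i.e. a multiplicative downward jump m -> m * exp(-eta).
dep, rate, dt, n_steps = 1.0, 0.2, 0.01, 500_000
m, masses = 0.0, []
for _ in range(n_steps):
    m += dep * dt                              # deposition between events
    if rng.random() < rate * dt:               # a rainfall event occurs
        m *= np.exp(-rng.exponential(0.5))     # multiplicative leaching jump
    masses.append(m)

masses = np.asarray(masses[n_steps // 2:])     # discard the transient
print(f"stationary mean salt mass ~ {masses.mean():.1f}")
```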
Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C
1994-01-01
The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC), with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single-exponential rate function, since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single-exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant. (ABSTRACT TRUNCATED AT 250 WORDS)
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
NASA Astrophysics Data System (ADS)
Brauer, Uwe; Karp, Lavi
2018-01-01
Local existence and well-posedness are shown for a class of solutions of the Euler-Poisson system. These solutions have a density ρ which either falls off at infinity or has compact support. The solutions have finite mass and finite energy functional, and they include the static spherical solutions for γ = 6/5. The result is achieved by using weighted Sobolev spaces of fractional order and a new non-linear estimate which allows one to estimate the physical density by the regularised non-linear matter variable. Gamblin has also studied this setting, but using very different function spaces. However, we believe that the functional setting we use is more appropriate for describing a physically isolated body and more suitable for studying the Newtonian limit.
Paula, Janice S; Leite, Isabel Cg; Almeida, Anderso B; Ambrosano, Glaucia Mb; Pereira, Antônio C; Mialhe, Fábio L
2012-01-13
The objective of this study was to investigate the influence of clinical conditions, socioeconomic status, home environment, and the subjective perceptions of parents and schoolchildren about general and oral health on schoolchildren's oral health-related quality of life (OHRQoL). A sample of 515 schoolchildren aged 12 years was randomly selected by cluster sampling from public and private schools in the city of Juiz de Fora, Brazil. The schoolchildren were clinically examined for the presence of caries lesions (DMFT and dmft indices), dental trauma, enamel defects, periodontal status (presence/absence of bleeding), dental treatment, and orthodontic treatment needs (DAI). The SiC index was calculated. The participants were asked to complete the Brazilian version of the Child Perceptions Questionnaire (CPQ11-14) and a questionnaire about the home environment. Questions were asked about the presence of general diseases and the children's self-perception of their general and oral health status. In addition, a questionnaire was sent to their parents inquiring about their socioeconomic status (family income, parents' education level, home ownership) and perceptions about the general and oral health of their school-aged children. The chi-square test was used for comparisons between proportions. Poisson regression was used for multivariate analysis with adjustment for variances. Univariate analysis revealed that school type, monthly family income, mother's education, family structure, number of siblings, use of cigarettes, alcohol and drugs in the family, parents' perception of the oral health of their schoolchildren, schoolchildren's self-perception of their general and oral health, and orthodontic treatment needs were significantly associated with poor OHRQoL (p < 0.001). After adjusting for potential confounders, variables were included in a multivariate Poisson regression. It was found that the variables children's self-perception of their oral health status, monthly family income, gender, orthodontic treatment need, mother's education, number of siblings, and household overcrowding showed a strong negative effect on oral health-related quality of life. It was concluded that the clinical, socioeconomic and home environment factors evaluated exerted a negative impact on the oral health-related quality of life of schoolchildren, demonstrating the importance of health managers addressing all these factors when planning oral health promotion interventions for this population.
Assessing agreement between malaria slide density readings.
Alexander, Neal; Schellenberg, David; Ngasala, Billy; Petzold, Max; Drakeley, Chris; Sutherland, Colin
2010-01-04
Several criteria have been used to assess agreement between replicate slide readings of malaria parasite density. Such criteria may be based on percent difference, or absolute difference, or a combination. Neither the rationale for choosing between these types of criteria, nor that for choosing the magnitude of difference which defines acceptable agreement, is clear. The current paper seeks a procedure which avoids the disadvantages of these current options and whose parameter values are more clearly justified. Variation of parasite density within a slide is expected, even when it has been prepared from a homogeneous sample. This places lower limits on sensitivity and observer agreement, quantified by the Poisson distribution. This means that, if a criterion of fixed percent difference is used for satisfactory agreement, the number of discrepant readings is over-estimated at low parasite densities. With a criterion of fixed absolute difference, the same happens at high parasite densities. For an ideal slide, following the Poisson distribution, a criterion based on a constant difference in square root counts would apply at all densities. This can be back-transformed to a difference in absolute counts, which, as expected, gives a wider range of acceptable agreement at higher average densities. In an example dataset from Tanzania, observed differences in square root counts correspond to 95% limits of agreement of -2,800 and +2,500 parasites/microl at an average density of 2,000 parasites/microl, and -6,200 and +5,700 parasites/microl at 10,000 parasites/microl. However, there were more outliers beyond those ranges at higher densities, meaning that the actual coverage of these ranges was not a constant 95%, but decreased with density. In a second study, a trial of microscopist training, the corresponding ranges of agreement are wider and asymmetrical: -8,600 to +5,200/microl, and -19,200 to +11,700/microl, respectively. By comparison, the optimal limits of agreement, corresponding to Poisson variation, are +/- 780 and +/- 1,800 parasites/microl, respectively. The focus of this approach on the volume of blood read leads to other conclusions. For example, no matter how large a volume of blood is read, some densities are too low to be reliably detected, which in turn means that disagreements on slide positivity may simply result from within-slide variation, rather than reading errors. The proposed method defines limits of acceptable agreement in a way which allows for the natural increase in variability with parasite density. This includes defining the levels of between-reader variability which are consistent with random variation: disagreements within these limits should not trigger additional readings. This approach merits investigation in other settings, in order to determine both the extent of its applicability, and appropriate numerical values for limits of agreement.
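The Poisson-only limits can be reproduced in a few lines, assuming (as an illustration, not stated in the paper) that parasites are counted against 200 white blood cells at a nominal 8,000 WBC/microl, i.e., roughly 0.025 microl of blood read. Since Var[sqrt(N)] is approximately 1/4 for a Poisson count, the difference of two independent root counts has standard deviation sqrt(1/2):

```python
import numpy as np

# Poisson-only 95% limits of agreement between two slide readings, computed
# on the square-root (variance-stabilising) scale and back-transformed.
vol_ul = 200 / 8000.0                      # assumed blood volume read, in ul
d95 = 1.96 * np.sqrt(0.5)                  # limits on the sqrt-count scale

for density in (2_000, 10_000):            # parasites per microlitre
    n = density * vol_ul                   # expected count in the volume read
    delta_counts = 2.0 * np.sqrt(n) * d95  # delta-method back-transformation
    print(f"{density:6d}/ul: Poisson limits ~ +/-{delta_counts / vol_ul:,.0f}/ul")
```

Under this assumed reading volume the sketch prints roughly +/-780 and +/-1,750 parasites/microl, close to the +/-780 and +/-1,800 quoted above.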
Nonlinear Analysis of Experimental Measurements 7.6. Theoretical Chemistry
2015-01-26
Publications reported include: Jianshu Cao, Robert J. Silbey, Jaeyoung Sung, "Quantitative Interpretation of the Randomness in Single Enzyme Turnover Times," Biophysical Journal; "Universality of Poisson Indicator and Fano Factor of Transport Event Statistics in Ion Channels and Enzyme Kinetics," J. Phys. B: At. Mol. Opt. Phys.; Jianshu Cao, Jianlan Wu, "Generalized Michaelis-Menten Equation for Conformation-Modulated Monomeric Enzymes," New...
Coma cluster ultradiffuse galaxies are not standard radio galaxies
NASA Astrophysics Data System (ADS)
Struble, Mitchell F.
2018-02-01
Matching members in the Coma cluster catalogue of ultradiffuse galaxies (UDGs) from SUBARU imaging with a very deep radio continuum survey source catalogue of the cluster using the Karl G. Jansky Very Large Array (VLA) within a rectangular region of ∼1.19 deg2 centred on the cluster core reveals matches consistent with random. An overlapping set of 470 UDGs and 696 VLA radio sources in this rectangular area finds 33 matches within a separation of 25 arcsec; dividing the sample into bins with separations bounded by 5, 10, 20 and 25 arcsec finds 1, 4, 17 and 11 matches. An analytical model estimate, based on the Poisson probability distribution, of the number of randomly expected matches within these same separation bounds is 1.7, 4.9, 19.4 and 14.2, each, respectively, consistent with the 95 per cent Poisson confidence intervals of the observed values. Dividing the data into five clustercentric annuli of 0.1° and into the four separation bins, finds the same result. This random match of UDGs with VLA sources implies that UDGs are not radio galaxies by the standard definition. Those VLA sources having integrated flux >1 mJy at 1.4 GHz in Miller, Hornschemeier and Mobasher without SDSS galaxy matches are consistent with the known surface density of background radio sources. We briefly explore the possibility that some unresolved VLA sources near UDGs could be young, compact, bright, supernova remnants of Type Ia events, possibly in the intracluster volume.
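The analytical estimate is a one-line Poisson calculation: with both catalogues spread uniformly over the ~1.19 deg² field, the expected number of chance pairs in a separation annulus is N_UDG x N_VLA x (annulus area)/(field area). The sketch below reproduces the quoted expectations approximately (it ignores field edges, which is presumably why it lands slightly above the published values):

```python
from math import pi

# Expected chance UDG-VLA matches per separation annulus, assuming both
# catalogues are uniformly (Poisson) distributed over the rectangular field.
n_udg, n_vla = 470, 696
field_arcsec2 = 1.19 * 3600.0**2

edges = [0, 5, 10, 20, 25]          # separation bin edges in arcsec
observed = [1, 4, 17, 11]           # matched pairs reported in each bin
for (r1, r2), obs in zip(zip(edges[:-1], edges[1:]), observed):
    lam = n_udg * n_vla * pi * (r2**2 - r1**2) / field_arcsec2
    print(f"{r1:2d}-{r2:2d} arcsec: expected {lam:5.1f} chance matches, observed {obs}")
```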
Exact solution of two interacting run-and-tumble random walkers with finite tumble duration
NASA Astrophysics Data System (ADS)
Slowman, A. B.; Evans, M. R.; Blythe, R. A.
2017-09-01
We study a model of interacting run-and-tumble random walkers operating under mutual hardcore exclusion on a one-dimensional lattice with periodic boundary conditions. We incorporate a finite, Poisson-distributed, tumble duration so that a particle remains stationary whilst tumbling, thus generalising the persistent random walker model. We present the exact solution for the nonequilibrium stationary state of this system in the case of two random walkers. We find this to be characterised by two lengthscales, one arising from the jamming of approaching particles, and the other from one particle moving when the other is tumbling. The first of these lengthscales vanishes in a scaling limit where the continuous-space dynamics is recovered, whilst the second remains finite. Thus the nonequilibrium stationary state reveals a rich structure of attractive, jammed and extended pieces.
Spectral multigrid methods for the solution of homogeneous turbulence problems
NASA Technical Reports Server (NTRS)
Erlebacher, G.; Zang, T. A.; Hussaini, M. Y.
1987-01-01
New three-dimensional spectral multigrid algorithms are analyzed and implemented to solve the variable coefficient Helmholtz equation. Periodicity is assumed in all three directions which leads to a Fourier collocation representation. Convergence rates are theoretically predicted and confirmed through numerical tests. Residual averaging results in a spectral radius of 0.2 for the variable coefficient Poisson equation. In general, non-stationary Richardson must be used for the Helmholtz equation. The algorithms developed are applied to the large-eddy simulation of incompressible isotropic turbulence.
Star Products with Separation of Variables Admitting a Smooth Extension
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2012-08-01
Given a complex manifold M with an open dense subset Ω endowed with a pseudo-Kähler form ω which cannot be smoothly extended to a larger open subset, we consider various examples where the corresponding Kähler-Poisson structure and a star product with separation of variables on (Ω, ω) admit smooth extensions to M. We give a simple criterion of the existence of a smooth extension of a star product and apply it to these examples.
Nearest-Neighbor Distances and Aggregative Effects in Turbulence
NASA Astrophysics Data System (ADS)
Lanerolle, Lyon W. J.; Rothschild, B. J.; Yeung, P. K.
2000-11-01
The dispersive nature of turbulence which causes fluid elements to move apart (on average) is well known. Here we study another facet of turbulent mixing relevant to marine population dynamics - on how small organisms (approximated by fluid particles) are brought close to each other and allowed to interact. The crucial role played by the small scales in this process allows us to use direct numerical simulations of stationary isotropic turbulence, here with Taylor-scale Reynolds numbers (R_λ) from 38 to 91. We study the evolution of the Nearest-Neighbor Distances (NND) for collections of fluid particles initially located randomly in space satisfying Poisson-type distributions with mean values from 0.5 to 2.0 Kolmogorov length scales. Our results show that as particles begin to disperse on average, some also begin to aggregate in space. In particular, we find that (i) a significant proportion of particles are closer to each other than if their NNDs were randomly distributed, (ii) aggregative effects become stronger with R_λ, and (iii) although the mean value of NND grows monotonically with time in Kolmogorov variables, the growth rates are slower at higher R_λ. These results may assist in explaining the ``patchiness'' in plankton distributions observed in biological oceanography. Further details are given in B. J. Rothschild et al., The Biophysical Interpretation of Spatial Effects of Small-scale Turbulent Flow in the Ocean (paper in prep.).
Hauser, Robert A; Heritier, Stephane; Rowse, Gerald J; Hewitt, L Arthur; Isaacson, Stuart H
2016-01-01
Droxidopa is a prodrug of norepinephrine indicated for the treatment of orthostatic dizziness, lightheadedness, or the "feeling that you are about to black out" in adult patients with symptomatic neurogenic orthostatic hypotension caused by primary autonomic failure including Parkinson disease (PD). The objective of this study was to compare fall rates in PD patients with symptomatic neurogenic orthostatic hypotension randomized to droxidopa or placebo. Study NOH306 was a 10-week, phase 3, randomized, placebo-controlled, double-blind trial of droxidopa in PD patients with symptomatic neurogenic orthostatic hypotension that included assessments of falls as a key secondary end point. In this report, the principal analysis consisted of a comparison of the rate of patient-reported falls from randomization to end of study in droxidopa versus placebo groups. A total of 225 patients were randomized; 222 patients were included in the safety analyses, and 197 patients provided efficacy data and were included in the falls analyses. The 92 droxidopa patients reported 308 falls, and the 105 placebo patients reported 908 falls. In the droxidopa group, the fall rate was 0.4 falls per patient-week; in the placebo group, the rate was 1.05 falls per patient-week (prespecified Wilcoxon rank sum P = 0.704; post hoc Poisson-inverse Gaussian test P = 0.014), yielding a relative risk reduction of 77% using the Poisson-inverse Gaussian model. Fall-related injuries occurred in 16.7% of droxidopa-treated patients and 26.9% of placebo-treated patients. Treatment with droxidopa appears to reduce falls in PD patients with symptomatic neurogenic orthostatic hypotension, but this finding must be confirmed.
NASA Astrophysics Data System (ADS)
Winahju, W. S.; Mukarromah, A.; Putri, S.
2015-03-01
Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, with a contribution of 18,994 people (8.7% of the world total). This automatically places Indonesia as the country with the highest leprosy morbidity among the ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The experimental units are located in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have a significant influence.
NASA Astrophysics Data System (ADS)
Kim, S. Y.; Oh, H. S.; Park, E. S.
2017-10-01
Herein, we elucidate a hidden variable in a shear transformation zone (STZ) volume (Ω) versus Poisson's ratio (ν) relation and clarify the correlation between STZ characteristics and the plasticity of metallic glasses (MGs). On the basis of cooperative shear model and atomic stress theories, we carefully formulate Ω as a function of molar volume (Vm) and ν. The twofold trend in Ω and ν is attributed to a relatively large variation of Vm as compared to that of ν as well as an inverse relation between Vm and ν. Indeed, the derived equation reveals that the number of atoms in an STZ instead of Ω is a microstructural characteristic which has a close relationship with plasticity since it reflects the preference of atomistic behaviors between cooperative shearing and the generation of volume strain fluctuation under stress. The results would deepen our understanding of the correlation between microscopic behaviors (STZ activation) and macroscopic properties (plasticity) in MGs and enable a quantitative approach in associating various STZ-related macroscopic behaviors with intrinsic properties of MGs.
Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca
2012-07-15
Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
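The proposed compound (a Poisson number of entering particles, each producing a Poisson number of hits) is the Neyman Type A distribution, which is straightforward to sample. A minimal sketch with invented parameters shows the resulting non-Poisson overdispersion:

```python
import numpy as np

rng = np.random.default_rng(3)

def neyman_type_a(mean_particles, hits_per_particle, size):
    """Hits in the target: a Poisson number of entering particles, each
    producing an independent Poisson number of hits (a Poisson-Poisson
    compound, i.e. the Neyman Type A distribution)."""
    particles = rng.poisson(mean_particles, size)
    return rng.poisson(hits_per_particle * particles)

h = neyman_type_a(mean_particles=4.0, hits_per_particle=1.5, size=500_000)
# Theory: mean = m*nu = 6, var = m*nu*(1 + nu) = 15, Fano factor = 1 + nu = 2.5
print(f"mean {h.mean():.2f}, var {h.var():.2f}, Fano {h.var() / h.mean():.2f}")
```

A plain Poisson would give a Fano factor of 1; the excess comes entirely from the two-stage randomness the formulation is designed to capture.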
Lopes, Daniele Almeida; Moraes, Suzana Alves de; Freitas, Isabel Cristina Martins de
2015-01-01
To determine the prevalence of, and factors associated with, low cognitive performance in a representative sample of the adult population of a progressively aging society. Cross-sectional population-based study carried out with three-stage sampling: 81 census tracts (primary sampling units) were randomly selected, followed by 1,672 households and 2,471 participants (weighted sample), corresponding to the second and third stages, respectively. The outcome prevalence was calculated according to sociodemographic, behavioral and health-related variables. Crude and adjusted prevalence ratios were estimated using Poisson regression. The prevalence of low cognitive performance was high, mainly among females, and showed linear trends across categories of age, schooling, income, plasma fibrinogen and self-reported health status. In multivariate models, gender, diabetes, fibrinogen and self-reported health status presented positive associations, while schooling, employment and sitting time presented negative associations with the outcome. Interventions related to the control of diabetes and fibrinogen levels, as well as improvements in health care, might delay low cognitive performance in progressively aging societies such as the study population.
Logistic regression for dichotomized counts.
Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W
2016-12-01
Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
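The link between the two model parts rests on a simple identity: under a Poisson(mu) count, P(Y > 0) = 1 - exp(-mu), so the dichotomised outcome still carries information about the count mean. A trivial sketch of this identity (not the paper's estimator):

```python
import numpy as np

# Under a Poisson(mu) count, the dichotomised outcome 1{Y > 0} is Bernoulli
# with p = 1 - exp(-mu); this identity is what ties the count part and the
# binary part of a shared-parameter model together.
for mu in (0.1, 0.5, 1.0, 2.0):
    print(f"mu = {mu:3.1f}:  P(Y > 0) = {1 - np.exp(-mu):.3f}")
```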
Comparison of RF spectrum prediction methods for dynamic spectrum access
NASA Astrophysics Data System (ADS)
Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.
2017-05-01
Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
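The simulated-data setup can be sketched as follows. The holding-time means and the persistence baseline are illustrative assumptions, not the paper's algorithms, but the generated occupancy sequence is the kind of input an HMM, MLP, or RNN predictor would be trained on:

```python
import numpy as np

rng = np.random.default_rng(10)

def occupancy(n_slots, mean_busy=5, mean_idle=8):
    """Channel occupancy as an alternating renewal process: busy/idle holding
    times (in sensing slots) drawn as Poisson random variables."""
    state, seq = 0, []
    while len(seq) < n_slots:
        dur = max(1, rng.poisson(mean_busy if state == 1 else mean_idle))
        seq.extend([state] * dur)
        state = 1 - state
    return np.array(seq[:n_slots])

ch = occupancy(100_000)
# Naive baseline any learned predictor should beat: next slot = current slot.
persistence_acc = np.mean(ch[1:] == ch[:-1])
print(f"persistence baseline accuracy: {persistence_acc:.3f}")
```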
A stochastic-dynamic model for global atmospheric mass field statistics
NASA Technical Reports Server (NTRS)
Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.
1981-01-01
A model that yields the spatial correlation structure of atmospheric mass field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. The model was expanded in spherical harmonics, and the correlation function was computed analytically using the expansion coefficients. The finite-difference equivalent was solved using a fast Poisson solver, and the correlation function was computed using stratified sampling of the individual realizations of F(omega) and hence of phi(omega). A higher-order equation for gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency, and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c sub o which gives the best fit to the data is close to the value expected from dynamical considerations.
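A fast Poisson solver of the kind mentioned above is, on a periodic grid, a few lines of FFT algebra. The sketch below solves the Poisson equation on a doubly periodic square (a simplified analogue of the report's spherical setting) and verifies it against a known solution:

```python
import numpy as np

def fft_poisson_2d(f, L=2 * np.pi):
    """Solve laplacian(phi) = f on a periodic [0, L)^2 grid via FFT
    (a standard 'fast Poisson solver'; returns the zero-mean solution)."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                   # avoid dividing the mean mode by zero
    phi_hat = -np.fft.fft2(f) / k2
    phi_hat[0, 0] = 0.0              # fix the arbitrary additive constant
    return np.real(np.fft.ifft2(phi_hat))

# Verify against phi = sin(x)cos(2y), for which laplacian(phi) = -5 phi.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
phi_exact = np.sin(X) * np.cos(2 * Y)
phi = fft_poisson_2d(-5.0 * phi_exact)
print("max error:", np.abs(phi - phi_exact).max())   # ~ machine precision
```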
Sojourning with the Homogeneous Poisson Process.
Liu, Piaomu; Peña, Edsel A
2016-01-01
In this pedagogical article, distributional properties, some surprising, pertaining to the homogeneous Poisson process (HPP), when observed over a possibly random window, are presented. Properties of the gap-time that covered the termination time and the correlations among gap-times of the observed events are obtained. Inference procedures, such as estimation and model validation, based on event occurrence data over the observation window, are also presented. We envision that through the results in this paper, a better appreciation of the subtleties involved in the modeling and analysis of recurrent events data will ensue, since the HPP is arguably one of the simplest among recurrent event models. In addition, the use of the theorem of total probability, Bayes theorem, the iterated rules of expectation, variance and covariance, and the renewal equation could be illustrative when teaching distribution theory, mathematical statistics, and stochastic processes at both the undergraduate and graduate levels. This article is targeted towards both instructors and students.
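One of the "surprising" properties alluded to is the waiting-time (inspection) paradox: the gap-time that covers a fixed inspection time is, on average, about twice the ordinary mean gap 1/λ. A short simulation (illustrative parameters) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(4)

def hpp_events(rate, t_end):
    """Event times of a homogeneous Poisson process on (0, t_end]."""
    n = rng.poisson(rate * t_end)
    return np.sort(rng.uniform(0.0, t_end, n))

rate, tau, covering = 1.0, 50.0, []
for _ in range(20_000):
    ev = np.concatenate(([0.0], hpp_events(rate, 100.0)))
    i = np.searchsorted(ev, tau)            # first event after time tau...
    if i < len(ev):
        covering.append(ev[i] - ev[i - 1])  # ...bounds the gap covering tau
print(f"mean covering gap: {np.mean(covering):.2f} (ordinary mean gap: {1/rate:.2f})")
```

The covering gap averages close to 2/λ because longer gaps are more likely to contain any fixed time point, a length-biased sampling effect.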
Normal forms for Poisson maps and symplectic groupoids around Poisson transversals
NASA Astrophysics Data System (ADS)
Frejlich, Pedro; Mărcuț, Ioan
2018-03-01
Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.
Angiogenic Signaling in Living Breast Tumor Models
2007-06-01
Poisson-distributed random noise is added in an amount relative to the desired signal-to-noise ratio. We fit the data using a regressive fitting... (Award Number: W81XWH-05-1-0396)
2018-01-12
...sequential representations, a method is required for determining which to use for the application at hand and, once a representation is selected, for... [CRMs and truncation] Consider a Poisson point process on R+ := [0,... at the heart of the study of truncated CRMs. They provide an iterative method that can be terminated at any point to yield a finite approximation to the...
Equilibrium structures of carbon diamond-like clusters and their elastic properties
NASA Astrophysics Data System (ADS)
Lisovenko, D. S.; Baimova, Yu. A.; Rysaeva, L. Kh.; Gorodtsov, V. A.; Dmitriev, S. V.
2017-04-01
Three-dimensional carbon diamond-like phases consisting of sp³-hybridized atoms, obtained by linking the frameworks of fullerene-like molecules, are studied by methods of molecular dynamics modeling. For eight cubic and one hexagonal diamond-like phases based on four types of fullerene-like molecules, equilibrium configurations are found and the elastic constants are calculated. The results obtained by the molecular dynamics method are used for analytical calculations of the elastic characteristics of the diamond-like phases with cubic and hexagonal anisotropy. It is found that, for a certain choice of the dilatation axis, three of these phases have negative Poisson's ratio, i.e., are partial auxetics. The variability of the engineering elasticity coefficients (Young's modulus, Poisson's ratio, shear modulus, and bulk modulus) is analyzed.
Hyperuniformity Length in Experimental Foam and Simulated Point Patterns
NASA Astrophysics Data System (ADS)
Chieco, Anthony; Roth, Adam; Dreyfus, Remi; Torquato, Salvatore; Durian, Douglas
2015-03-01
Systems without long-wavelength number density fluctuations are called hyperuniform (HU). The degree to which a point pattern is HU may be tested in terms of the variance in the number of points inside randomly placed boxes of side length L. If HU, then the variance is due solely to fluctuations near the boundary rather than throughout the entire volume of the box. To make this concrete we introduce a hyperuniformity length h, equal to the width of the boundary region where number fluctuations occur. Thus h helps characterize the disorder. We show how to deduce h from the number variance, and we do so for Poisson and Einstein patterns plus those made by the vertices and bubble centroids in 2d foams. A Poisson pattern is one where points are totally random. These are not HU, and h equals L/2. We coin "Einstein patterns" to be those where points in a lattice are independently displaced from their sites by a normally distributed amount. These are HU, and h equals the RMS displacement from the lattice sites. Bubble centroids and vertices are both HU. For these, h is less than L/2 and increases more slowly than linearly in L. The centroids are more HU than the vertices, in that their h increases more slowly.
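The box-counting diagnostic is easy to reproduce. The sketch below (illustrative density and displacement scale) builds a Poisson pattern and an Einstein pattern of equal density and compares their number variances: the former grows like ρL² (volume term), the latter only like the boundary term:

```python
import numpy as np

rng = np.random.default_rng(5)

def number_variance(points, box, side, n_boxes=5_000):
    """Variance of the number of points inside randomly placed square boxes."""
    corners = rng.uniform(0.0, side - box, (n_boxes, 2))
    counts = np.array([np.sum(np.all((points >= c) & (points < c + box), axis=1))
                       for c in corners])
    return counts.var()

side, n = 100.0, 10_000                        # density rho = 1
poisson_pts = rng.uniform(0.0, side, (n, 2))   # totally random pattern
g = int(np.sqrt(n))
lattice = np.stack(np.meshgrid(np.arange(g), np.arange(g)), -1).reshape(-1, 2)
einstein_pts = (lattice * (side / g) + rng.normal(0.0, 0.3, (n, 2))) % side

for L in (2.0, 5.0, 10.0):
    print(f"L={L:4.1f}: Poisson var ~ {number_variance(poisson_pts, L, side):7.1f}"
          f"  Einstein var ~ {number_variance(einstein_pts, L, side):6.2f}")
```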
Song, X X; Zhao, Q; Tao, T; Zhou, C M; Diwan, V K; Xu, B
2018-05-30
Records of absenteeism from primary schools are valuable data for infectious disease surveillance. However, the analysis of absenteeism is complicated by data features of clustering at zero, non-independence and overdispersion. This study aimed to generate an appropriate model to handle the absenteeism data collected in a European Commission-granted project for infectious disease surveillance in rural China, and to evaluate the validity and timeliness of the resulting model for early warnings of infectious disease outbreaks. Four steps were taken: (1) building a 'well-fitting' model by the zero-inflated Poisson model with random effects (ZIP-RE) using the absenteeism data from the first implementation year; (2) applying the resulting model to predict the 'expected' number of absenteeism events in the second implementation year; (3) computing the differences between the observations and the expected values (O-E values) to generate an alternative series of data; (4) evaluating the early-warning validity and timeliness of the observational data and the model-based O-E values via the EARS-3C algorithms with regard to the detection of real cluster events. The results indicate that ZIP-RE and its corresponding O-E values could improve the detection of aberrations, reduce false-positive signals, and are applicable to zero-inflated data.
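The core of the ZIP specification can be sketched in a few lines. The sketch below omits the random effects and all real covariates (both are assumptions of the paper's fuller model) and simply shows why a plain Poisson model is inadequate for such data:

```python
import numpy as np

rng = np.random.default_rng(6)

def zip_sample(pi_zero, lam, size):
    """Zero-inflated Poisson: with probability pi_zero a structural zero,
    otherwise a Poisson(lam) count.  (The paper's ZIP-RE adds school-level
    random effects to lam; omitted here.)"""
    structural = rng.random(size) < pi_zero
    return np.where(structural, 0, rng.poisson(lam, size))

y = zip_sample(pi_zero=0.6, lam=3.0, size=100_000)
print(f"share of zeros: {np.mean(y == 0):.3f} "
      f"(a plain Poisson with mean {y.mean():.2f} gives {np.exp(-y.mean()):.3f})")
print(f"variance/mean: {y.var() / y.mean():.2f} (>1, i.e. overdispersed)")
```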
Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain
2018-02-01
Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
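In one dimension, a binary Markov mixture is simply an alternating sequence of material segments with exponentially distributed lengths, which is exactly the geometry CLS-type algorithms sample on the fly along a particle track. A minimal sketch (illustrative mean chord lengths):

```python
import numpy as np

rng = np.random.default_rng(9)

def markov_mixture_1d(mean_chords, total_len):
    """One realisation of a 1D binary Markov mixture: alternating material
    segments with exponentially distributed (chord) lengths."""
    pos, mat, segments = 0.0, 0, []
    while pos < total_len:
        ell = rng.exponential(mean_chords[mat])
        segments.append((mat, min(ell, total_len - pos)))
        pos += ell
        mat = 1 - mat
    return segments

segs = markov_mixture_1d(mean_chords=(1.0, 3.0), total_len=10_000.0)
frac0 = sum(l for m, l in segs if m == 0) / 10_000.0
print(f"material-0 volume fraction ~ {frac0:.3f} (theory: 1/(1+3) = 0.25)")
```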
Witte, Susan S; Aira, Toivgoo; Tsai, Laura Cordisco; Riedel, Marion; Offringa, Reid; Chang, Mingway; El-Bassel, Nabila; Ssewamala, Fred
2015-03-01
We tested whether a structural intervention combining savings-led microfinance and HIV prevention components would achieve enhanced reductions in sexual risk among women engaging in street-based sex work in Ulaanbaatar, Mongolia, compared with an HIV prevention intervention alone. Between November 2011 and August 2012, we randomized 107 eligible women who completed baseline assessments to either a 4-session HIV sexual risk reduction intervention (HIVSRR) alone (n=50) or a 34-session HIVSRR plus a savings-led microfinance intervention (n=57). At 3- and 6-month follow-up assessments, participants reported unprotected acts of vaginal intercourse with paying partners and number of paying partners with whom they engaged in sexual intercourse in the previous 90 days. Using Poisson and zero-inflated Poisson model regressions, we examined the effects of assignment to treatment versus control condition on outcomes. At 6-month follow-up, the HIVSRR plus microfinance participants reported significantly fewer paying sexual partners and were more likely to report zero unprotected vaginal sex acts with paying sexual partners. Findings advance the HIV prevention repertoire for women, demonstrating that risk reduction may be achieved through a structural intervention that relies on asset building, including savings, and alternatives to income from sex work.
Prediction of road accidents: A Bayesian hierarchical approach.
Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H
2013-03-01
In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data of the Austrian rural motorway network. In the case study, on randomly selected road segments the methodology is used to produce a model to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any road network provided that the required data are available. Copyright © 2012 Elsevier Ltd. All rights reserved.
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
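The modelling framework referred to here is the classical Poisson-gamma mixture: a gamma random effect on the Poisson rate yields, marginally, the negative binomial. A short sketch verifying the moment formulas (illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(7)

# Gamma random effect on the Poisson rate => marginal negative binomial:
# u ~ Gamma(shape=r, scale=1/r) has mean 1, and Y | u ~ Poisson(mu * u)
# gives E[Y] = mu and Var[Y] = mu + mu^2 / r (overdispersion).
mu, r, n = 4.0, 2.0, 500_000
u = rng.gamma(shape=r, scale=1.0 / r, size=n)
y = rng.poisson(mu * u)
print(f"mean {y.mean():.2f} (theory {mu:.2f}), "
      f"var {y.var():.2f} (theory {mu + mu**2 / r:.2f})")
```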
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Tartakovsky, Alexandre M.
This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement can be obtained at a correlation length of about 0.25 the size of the porous media perpendicular to the flow.
Universal self-similarity of propagating populations
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2010-07-01
This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d -dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common—yet arbitrary—motion pattern; each particle has its own random propagation parameters—emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles’ displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles’ underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.
NASA Technical Reports Server (NTRS)
Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.
1992-01-01
The current controversy regarding regularity vs. clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models, which bias the observed nearest-neighbor statistics towards regularity. Interpretation of these nearest-neighbor statistics is discussed for many cases of superpositions of clustering, randomness, and regularity. A detailed analysis is carried out of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data, showing that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.
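For intuition, a short sketch comparing the empirical nearest-neighbor cumulative distribution of a simulated 2-D Poisson point field with the theoretical curve 1 - exp(-λπr²); edge effects are ignored and the data are synthetic, not the cloud-field observations.

```python
# Sketch: nearest-neighbour cumulative distribution for a 2-D Poisson
# point process vs. the theoretical curve 1 - exp(-lambda * pi * r**2).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
lam, side = 100.0, 10.0                      # intensity, domain size
n = rng.poisson(lam * side**2)
pts = rng.uniform(0.0, side, size=(n, 2))

tree = cKDTree(pts)
d, _ = tree.query(pts, k=2)                  # k=2: nearest neighbour != self
nn = d[:, 1]

for r in (0.02, 0.05, 0.10, 0.15):
    empirical = (nn <= r).mean()
    theory = 1.0 - np.exp(-lam * np.pi * r**2)
    print(r, round(empirical, 3), round(theory, 3))
```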
The influence of climate variables on dengue in Singapore.
Pinto, Edna; Coelho, Micheline; Oliver, Leuda; Massad, Eduardo
2011-12-01
In this work we correlated dengue cases with climatic variables for the city of Singapore. This was done through a Poisson Regression Model (PRM) that considers dengue cases as the dependent variable and the climatic variables (rainfall, maximum and minimum temperature and relative humidity) as independent variables. We also used Principal Components Analysis (PCA) to choose the variables that influence the increase in the number of dengue cases in Singapore, where PC₁ (Principal component 1) is represented by temperature and rainfall and PC₂ (Principal component 2) is represented by relative humidity. We calculated the probability of occurrence of new cases of dengue and the relative risk of occurrence of dengue cases influenced by climatic variables. The months from July to September showed the highest probabilities of the occurrence of new cases of the disease throughout the year. This was based on an analysis of time series of maximum and minimum temperature. An interesting result was that for every 2-10°C of variation of the maximum temperature, there was an average increase of 22.2-184.6% in the number of dengue cases. For the minimum temperature, we observed that for the same variation, there was an average increase of 26.1-230.3% in the number of dengue cases from April to August. After correlation analysis, precipitation and relative humidity were excluded from the Poisson regression model because they did not correlate well with dengue cases. Additionally, the relative risk of occurrence of the disease under the influence of temperature variation ranged from 1.2-2.8 for maximum temperature and from 1.3-3.3 for minimum temperature. Therefore, temperature (maximum and minimum) was the best predictor of the increased number of dengue cases in Singapore.
Heavy Ion Irradiation Fluence Dependence for Single-Event Upsets of NAND Flash Memory
NASA Technical Reports Server (NTRS)
Chen, Dakai; Wilcox, Edward; Ladbury, Raymond; Kim, Hak; Phan, Anthony; Seidleck, Christina; LaBel, Kenneth
2016-01-01
We investigated the single-event effect (SEE) susceptibility of the Micron 16 nm NAND flash, and found that the single-event upset (SEU) cross section varied inversely with fluence: the cross section decreased as fluence increased. We attribute the effect to the variable upset sensitivities of the memory cells. The current test standards and procedures assume that SEUs follow a Poisson process and do not take into account the variability in the error rate with fluence. Therefore, heavy ion irradiation of devices with a variable upset sensitivity distribution using typical fluence levels may underestimate the cross section and on-orbit event rate.
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
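A bare-bones sketch of a zero-inflated Poisson fit on simulated counts; it has no heterogeneous random effects, and the covariate and parameter values are illustrative, not the COMBINE data.

```python
# Sketch: zero-inflated Poisson fit on simulated counts with structural
# zeros (intercept-only inflation model; covariates are illustrative).
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
p_zero = 0.3                                  # structural-zero probability
mu = np.exp(0.5 + 0.6 * x)
y = np.where(rng.uniform(size=n) < p_zero, 0, rng.poisson(mu))

X = sm.add_constant(x)
fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False)
print(fit.params)        # inflation intercept + count-model coefficients
```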
Twomey, Conal D.; Prince, Martin; Cieza, Alarcos; Baldwin, David S.; Prina, A. Matthew
2015-01-01
Background: Comprehensive understanding of the determinants of health service use (HSU) by older people with depression is essential for health service planning for an ageing global population. This study aimed to determine the extent to which depressive symptom severity and functioning are associated with HSU by older people with depression in low and middle income countries (LMICs). Methods: A cross-sectional analysis of the 10/66 Dementia Research Group population-based surveys dataset. Participants (n = 4590) were those aged 65 or older, in the clinical range for depressive symptoms (defined as scoring four or more on the EURO-D), living in 13 urban and/or rural catchment areas in nine LMICs. Associations were calculated using Poisson regression and random-effects meta-analysis. Results: After adjustment for confounding variables, (EURO-D) depressive symptom severity was significantly associated with “any community HSU” (Pooled Prevalence Ratios = 1.02; 95% CI = 1.01–1.03) but not hospital admission. Conversely, after adjustment, (WHODAS-II) functioning was significantly associated with hospital admission (Pooled PR = 1.14; 95% CI = 1.02–1.26) but not “any community HSU”. Conclusions: Depressive symptom severity does not explain a large proportion of the variance in HSU by older people with depression in LMICs. The association of functioning with this HSU is worthy of further investigation. In LMICs, variables related to accessibility may be more important correlates of HSU than variables directly related to health problems. PMID:25849540
Spatiotemporal variation in heat-related out-of-hospital cardiac arrest during the summer in Japan.
Onozuka, Daisuke; Hagihara, Akihito
2017-04-01
Although several studies have reported the impacts of extremely high temperature on cardiovascular diseases, few studies have investigated the spatiotemporal variation in the incidence of out-of-hospital cardiac arrest (OHCA) due to extremely high temperature in Japan. Daily OHCA data from 2005 to 2014 were acquired from all 47 prefectures of Japan. We used time-series Poisson regression analysis combined with a distributed lag non-linear model to assess the temporal variability in the effects of extremely high temperature on OHCA incidence in each prefecture, adjusted for time trends. Spatial variability in the relationships between extremely high temperature and OHCA between prefectures was estimated using a multivariate random-effects meta-analysis. We analyzed 166,496 OHCA cases of presumed cardiac origin occurring during the summer (June to September) that met the inclusion criteria. The minimum morbidity percentile (MMP) was the 51st percentile of temperature during the summer in Japan. The overall cumulative relative risk at the 99th percentile vs. the MMP over lags 0-10 days was 1.21 (95% CI: 1.12-1.31). There was also a strong low-temperature effect during the summer periods. No substantial difference in spatial or temporal variability was observed over the study period. Our study demonstrated spatiotemporal homogeneity in the risk of OHCA during periods of extremely high temperature between 2005 and 2014 in Japan. Our findings suggest that public health strategies for OHCA due to extremely high temperatures should be finely adjusted and should particularly account for the unchanging risk during the summer. Copyright © 2017 Elsevier B.V. All rights reserved.
Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds
NASA Astrophysics Data System (ADS)
Martínez-Torres, David; Miranda, Eva
2018-01-01
We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.
Time fluctuation analysis of forest fire sequences
NASA Astrophysics Data System (ADS)
Vega Orozco, Carmen D.; Kanevski, Mikhaïl; Tonini, Marj; Golay, Jean; Pereira, Mário J. G.
2013-04-01
Forest fires are complex events involving both space and time fluctuations. Understanding their dynamics and pattern distribution is of great importance in order to improve resource allocation and support fire management actions at local and global levels. This study aims at characterizing the temporal fluctuations of forest fire sequences observed in Portugal, the country that holds the largest wildfire land dataset in Europe. This research applies several exploratory data analysis measures to 302,000 forest fires that occurred from 1980 to 2007. The applied clustering measures are: Morisita clustering index, fractal and multifractal dimensions (box-counting), Ripley's K-function, Allan Factor, and variography. These algorithms enable a global time structural analysis describing the degree of clustering of a point pattern and defining whether the observed events occur randomly, in clusters or in a regular pattern. The considered methods are of general importance and can be used for other spatio-temporal events (e.g. crime, epidemiology, biodiversity, geomarketing, etc.). An important contribution of this research deals with the analysis and estimation of local measures of clustering that help in understanding the temporal structure of the events. Each measure is described and executed for the raw data (forest fires geo-database) and the results are compared to reference patterns generated under the null hypothesis of randomness (Poisson processes) embedded in the same time period as the raw data. This comparison enables estimating the degree of deviation of the real data from a Poisson process. Generalizations of these clustering methods to functional measures, taking into account a measured variable associated with the phenomena (i.e. burned area), were also applied and adapted to detect time dependences. The time clustering of the raw data is compared with Poisson processes at several thresholds of the measured function; the clustering measure value then depends on the threshold, which helps in understanding the time pattern of the studied events. Our findings detected the presence of overdensity of events in particular time periods and showed that the forest fire sequences in Portugal can be considered a multifractal process with a degree of time-clustering of the events. Key words: time sequences, Morisita index, fractals, multifractals, box-counting, Ripley's K-function, Allan Factor, variography, forest fires, point process. Acknowledgements This work was partly supported by the SNSF Project No. 200021-140658, "Analysis and Modelling of Space-Time Patterns in Complex Regions". References - Kanevski M. (Editor). 2008. Advanced Mapping of Environmental Data: Geostatistics, Machine Learning and Bayesian Maximum Entropy. London / Hoboken: iSTE / Wiley. - Telesca L. and Pereira M.G. 2010. Time-clustering investigation of fire temporal fluctuations in Portugal, Nat. Hazards Earth Syst. Sci., vol. 10(4): 661-666. - Vega Orozco C., Tonini M., Conedera M., Kanevski M. (2012) Cluster recognition in spatial-temporal sequences: the case of forest fires, Geoinformatica, vol. 16(4): 653-673.
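A compact sketch of one of the listed measures, the Allan Factor, evaluated here on a simulated homogeneous Poisson event sequence, for which the expected value is close to 1 at every window length (clustered sequences give values above 1).

```python
# Sketch: Allan Factor of an event-time sequence, compared with the
# Poisson expectation AF(T) ~= 1 at all window lengths T.
import numpy as np

def allan_factor(times, T, total):
    """AF(T) = E[(N_{k+1} - N_k)^2] / (2 E[N_k]) over adjacent windows."""
    edges = np.arange(0.0, total + T, T)
    counts, _ = np.histogram(times, bins=edges)
    diff = np.diff(counts)
    return (diff**2).mean() / (2.0 * counts.mean())

rng = np.random.default_rng(4)
total, rate = 10_000.0, 1.0
poisson_times = np.sort(rng.uniform(0.0, total, rng.poisson(rate * total)))

for T in (1.0, 10.0, 100.0):
    print(T, allan_factor(poisson_times, T, total))   # all close to 1
```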
Assessing historical rate changes in global tsunami occurrence
Geist, E.L.; Parsons, T.
2011-01-01
The global catalogue of tsunami events is examined to determine if transient variations in tsunami rates are consistent with a Poisson process commonly assumed for tsunami hazard assessments. The primary data analyzed are tsunamis with maximum sizes >1m. The record of these tsunamis appears to be complete since approximately 1890. A secondary data set of tsunamis >0.1m is also analyzed that appears to be complete since approximately 1960. Various kernel density estimates used to determine the rate distribution with time indicate a prominent rate change in global tsunamis during the mid-1990s. Less prominent rate changes occur in the early- and mid-20th century. To determine whether these rate fluctuations are anomalous, the distribution of annual event numbers for the tsunami catalogue is compared to Poisson and negative binomial distributions, the latter of which includes the effects of temporal clustering. Compared to a Poisson distribution, the negative binomial distribution model provides a consistent fit to tsunami event numbers for the >1m data set, but the Poisson null hypothesis cannot be falsified for the shorter duration >0.1m data set. Temporal clustering of tsunami sources is also indicated by the distribution of interevent times for both data sets. Tsunami event clusters consist only of two to four events, in contrast to protracted sequences of earthquakes that make up foreshock-main shock-aftershock sequences. From past studies of seismicity, it is likely that there is a physical triggering mechanism responsible for events within the tsunami source 'mini-clusters'. In conclusion, prominent transient rate increases in the occurrence of global tsunamis appear to be caused by temporal grouping of geographically distinct mini-clusters, in addition to the random preferential location of global M >7 earthquakes along offshore fault zones.
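A small sketch of the kind of Poisson-versus-negative-binomial comparison described, using simulated overdispersed annual counts and a method-of-moments negative binomial fit; the numbers are synthetic, not the tsunami catalogue.

```python
# Sketch: comparing Poisson and negative binomial fits to annual event
# counts via the variance/mean ratio and log-likelihoods (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Clustered (overdispersed) annual counts: gamma-mixed Poisson.
counts = rng.poisson(rng.gamma(shape=2.0, scale=5.0, size=120))

mean, var = counts.mean(), counts.var(ddof=1)
print("variance/mean ratio:", var / mean)     # ~1 for a Poisson process

# Method-of-moments negative binomial (valid when var > mean):
# var = mu + mu**2 / r, and scipy's nbinom(r, p) has p = r / (r + mu).
r = mean**2 / (var - mean)
p = r / (r + mean)
ll_pois = stats.poisson.logpmf(counts, mean).sum()
ll_nb = stats.nbinom.logpmf(counts, r, p).sum()
print("log-likelihoods (Poisson, neg. binomial):", ll_pois, ll_nb)
```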
Roig, Lydia; Perez, Santiago; Prieto, Gemma; Martin, Carlos; Advani, Mamta; Armengol, Angelina; Roura, Pilar; Manresa, Josep Maria; Briones, Elena
2010-02-04
It is a priority to achieve smoking cessation in diabetic smokers, given that this is a group of patients with elevated cardiovascular risk. Furthermore, tobacco has a multiplying effect on micro- and macrovascular complications. Smoking abstinence rates increase as the intensity of the intervention, the length of the intervention, and the number and diversity of contacts with the healthcare professional during the intervention increase. However, there are few published studies about smoking cessation in diabetics in primary care, a level of healthcare that plays an essential role in these patients. Therefore, the aim of the present study is to evaluate the effectiveness of an intensive smoking cessation intervention in diabetic patients in primary care. Cluster randomized trial, controlled and multicentric. Randomization unit: Primary Care Team. 546 diabetic smokers older than 14 years of age whose disease is controlled by one of the primary care teams in the study. Continuous tobacco abstinence (a person who has not smoked for at least six months and with a CO level of less than 6 ppm measured by a co-oximeter), evolution in Prochaska and DiClemente's Transtheoretical Model of Change, number of cigarettes/day, length of the visit. Point of assessment: one year post-inclusion in the study. Brief motivational interview for diabetic smokers at the pre-contemplation and contemplation stages, intensive motivational interview with pharmacotherapy for diabetic smokers in the preparation-action stage, and reinforcing intervention in the maintenance stage. A descriptive analysis of all variables will be done, as well as a multilevel logistic regression and a Poisson regression. All analyses will be done on an intention-to-treat basis and will be adjusted for potential confounding factors and variables of clinical importance. Statistical packages: SPSS15, STATA10 and HLM6. The present study will try to describe the profile of a diabetic smoker who receives the most benefit from an intensive intervention in primary care. The results will be useful for primary care professionals in their usual clinical practice. ClinicalTrials.gov Identifier: NCT00954967.
The effect of vaccination coverage and climate on Japanese encephalitis in Sarawak, Malaysia.
Impoinvil, Daniel E; Ooi, Mong How; Diggle, Peter J; Caminade, Cyril; Cardosa, Mary Jane; Morse, Andrew P; Baylis, Matthew; Solomon, Tom
2013-01-01
Japanese encephalitis (JE) is the leading cause of viral encephalitis across Asia, with approximately 70,000 cases a year and 10,000 to 15,000 deaths. Because JE incidence varies widely over time, partly due to inter-annual climate variability effects on mosquito vector abundance, it becomes more complex to assess the effects of a vaccination programme, since more or less climatically favourable years could also contribute to a change in incidence post-vaccination. Therefore, the objective of this study was to quantify the vaccination effect on confirmed Japanese encephalitis (JE) cases in Sarawak, Malaysia after controlling for climate variability, to better understand the temporal dynamics of JE virus transmission and control. Monthly data on serologically confirmed JE cases were acquired from Sibu Hospital in Sarawak from 1997 to 2006. JE vaccine coverage (non-vaccine years vs. vaccine years) and meteorological predictor variables, including temperature, rainfall and the Southern Oscillation index (SOI), were tested for their association with JE cases using Poisson time series analysis, controlling for seasonality and long-term trend. Over the 10-year surveillance period, 133 confirmed JE cases were identified. There was an estimated 61% reduction in JE risk after the introduction of vaccination when no account is taken of the effects of climate. This reduction is only approximately 45% when the effects of inter-annual variability in climate are controlled for in the model. The Poisson model indicated that rainfall (lag 1 month), minimum temperature (lag 6 months) and SOI (lag 6 months) were positively associated with JE cases. This study provides the first improved estimate of JE reduction through vaccination by taking account of climate inter-annual variability. Our analysis confirms that vaccination has substantially reduced JE risk in Sarawak, but this benefit may be overestimated if climate effects are ignored.
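A schematic version of a Poisson time-series regression with lagged covariates and a vaccination indicator, fitted to simulated monthly data; the variable names and coefficients are illustrative assumptions, not the Sarawak dataset.

```python
# Sketch: Poisson regression of monthly case counts on lagged weather
# covariates plus an intervention indicator (all data simulated).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
months = 120
df = pd.DataFrame({
    "rain": rng.gamma(2.0, 50.0, months),
    "tmin": 22 + 2 * np.sin(2 * np.pi * np.arange(months) / 12),
    "vaccine": (np.arange(months) >= 60).astype(int),
})
df["rain_lag1"] = df["rain"].shift(1)
df["tmin_lag6"] = df["tmin"].shift(6)

# Simulated truth: positive lagged weather effects, protective vaccine.
mu = np.exp(-1.0 + 0.002 * df["rain_lag1"].fillna(0)
            + 0.05 * df["tmin_lag6"].fillna(0) - 0.9 * df["vaccine"])
df["cases"] = rng.poisson(mu)

fit = smf.glm("cases ~ rain_lag1 + tmin_lag6 + vaccine",
              data=df.dropna(), family=sm.families.Poisson()).fit()
print(np.exp(fit.params))   # rate ratios; vaccine term near exp(-0.9)
```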
Hierarchical Bayesian spatial models for alcohol availability, drug "hot spots" and violent crime.
Zhu, Li; Gorman, Dennis M; Horel, Scott
2006-12-07
Ecologic studies have shown a relationship between alcohol outlet densities, illicit drug use and violence. The present study examined this relationship in the City of Houston, Texas, using a sample of 439 census tracts. Neighborhood sociostructural covariates, alcohol outlet density, drug crime density and violent crime data were collected for the year 2000, and analyzed using hierarchical Bayesian models. Model selection was accomplished by applying the Deviance Information Criterion. The counts of violent crime in each census tract were modelled as having a conditional Poisson distribution. Four neighbourhood explanatory variables were identified using principal component analysis. The best-fitting model was the one including both unstructured and spatially dependent random effects. The results showed that drug-law violation explained a greater amount of variance in violent crime rates than alcohol outlet densities. The relative risk for drug-law violation was 2.49 and that for alcohol outlet density was 1.16. Of the neighbourhood sociostructural covariates, males aged 15 to 24 showed an effect on violence, with a 16% decrease in relative risk for each one-standard-deviation increase in that variable. Both unstructured heterogeneity and spatial dependence random effects needed to be included in the model. The analysis presented suggests that activity around illicit drug markets is more strongly associated with violent crime than is alcohol outlet density. Unique among the ecological studies in this field, the present study not only shows the direction and magnitude of impact of neighbourhood sociostructural covariates as well as alcohol and illicit drug activities in a neighbourhood, it also reveals the importance of applying hierarchical Bayesian models in this research field, as both spatial dependence and heterogeneity random effects need to be considered simultaneously.
Lamm, Steven H; Robbins, Shayhan A; Zhou, Chao; Lu, Jun; Chen, Rusan; Feinleib, Manning
2013-02-01
To examine the analytic role of arsenic exposure on cancer mortality among the low-dose (well water arsenic level <150 μg/L) villages in the Blackfoot-disease (BFD) endemic area of southwest Taiwan and with respect to the southwest regional data. Poisson analyses of the bladder and lung cancer deaths with respect to arsenic exposure (μg/kg/day) for the low-dose (<150 μg/L) villages with exposure defined by the village median, mean, or maximum and with or without regional data. Use of the village median well water arsenic level as the exposure metric introduced misclassification bias by including villages with levels >500 μg/L, but use of the village mean or the maximum did not. Poisson analyses using mean or maximum arsenic levels showed significant negative cancer slope factors for models of bladder cancers and of bladder and lung cancers combined. Inclusion of the southwest Taiwan regional data did not change the findings when the model contained an explanatory variable for non-arsenic differences. A positive slope could only be generated by including the comparison population as a separate data point with the assumption of zero arsenic exposure from drinking water and eliminating the variable for non-arsenic risk factors. The cancer rates are higher among the low-dose (<150 μg/L) villages in the BFD area than in the southwest Taiwan region. However, among the low-dose villages in the BFD area, cancer risks suggest a negative association with well water arsenic levels. Positive differences from regional data seem attributable to non-arsenic ecological factors. Copyright © 2012 Elsevier Inc. All rights reserved.
Bonate, Peter L; Sung, Crystal; Welch, Karen; Richards, Susan
2009-10-01
Patients that are exposed to biotechnology-derived therapeutics often develop antibodies to the therapeutic, the magnitude of which is assessed by measuring antibody titers. A statistical approach for analyzing antibody titer data conditional on seroconversion is presented. The proposed method is to first transform the antibody titer data based on a geometric series using a common ratio of 2 and a scale factor of 50 and then analyze the exponent using a zero-inflated or hurdle model assuming a Poisson or negative binomial distribution with random effects to account for patient heterogeneity. Patient specific covariates can be used to model the probability of developing an antibody response, i.e., seroconversion, as well as the magnitude of the antibody titer itself. The method was illustrated using antibody titer data from 87 male seroconverted Fabry patients receiving Fabrazyme. Titers from five clinical trials were collected over 276 weeks of therapy with anti-Fabrazyme IgG titers ranging from 100 to 409,600 after exclusion of seronegative patients. The best model to explain seroconversion was a zero-inflated Poisson (ZIP) model where cumulative dose (under a constant dose regimen of dosing every 2 weeks) influenced the probability of seroconversion. There was an 80% chance of seroconversion when the cumulative dose reached 210 mg (90% confidence interval: 194-226 mg). No difference in antibody titers was noted between Japanese or Western patients. Once seroconverted, antibody titers did not remain constant but decreased in an exponential manner from an initial magnitude to a new lower steady-state value. The expected titer after the new steady-state titer had been achieved was 870 (90% CI: 630-1109). The half-life to the new steady-state value after seroconversion was 44 weeks (90% CI: 17-70 weeks). Time to seroconversion did not appear to be correlated with titer at the time of seroconversion. The method can be adequately used to model antibody titer data.
Normal and compound poisson approximations for pattern occurrences in NGS reads.
Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu
2012-06-01
Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
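For intuition, a toy simulation of word-pattern counts in randomly sampled reads under an i.i.d. background model, compared with the matching Poisson mean; this is a much cruder setting than the paper's models and bounds.

```python
# Sketch: occurrences of a short word across randomly sampled reads,
# compared with a Poisson approximation (i.i.d. background model).
import numpy as np

rng = np.random.default_rng(7)
word, read_len, n_reads = "ACGT", 100, 5000
genome = "".join(rng.choice(list("ACGT"), size=1_000_000))

def count_word(s, w):
    """Count (possibly overlapping) occurrences of w in s."""
    return sum(1 for i in range(len(s) - len(w) + 1) if s[i:i + len(w)] == w)

starts = rng.integers(0, len(genome) - read_len, n_reads)
counts = np.array([count_word(genome[s:s + read_len], word) for s in starts])

# Under the i.i.d. model each position matches with prob 4**(-len(word)).
expected = (read_len - len(word) + 1) * 4.0 ** (-len(word))
print("mean count:", counts.mean(), "Poisson mean:", expected)
print("variance:  ", counts.var(), "(a Poisson's would equal its mean)")
```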
Generalized master equation via aging continuous-time random walks.
Allegrini, Paolo; Aquino, Gerardo; Grigolini, Paolo; Palatella, Luigi; Rosa, Angelo
2003-11-01
We discuss the problem of the equivalence between continuous-time random walk (CTRW) and generalized master equation (GME). The walker, making instantaneous jumps from one site of the lattice to another, resides in each site for extended times. The sojourn times have a distribution density ψ(t) that is assumed to be an inverse power law with power index μ. We assume that the Onsager principle is fulfilled, and we use this assumption to establish a complete equivalence between the GME and the Montroll-Weiss CTRW. We prove that this equivalence is confined to the case where ψ(t) is an exponential. We argue that this is so because the Montroll-Weiss CTRW, as recently proved by Barkai [E. Barkai, Phys. Rev. Lett. 90, 104101 (2003)], is nonstationary, thereby implying aging, while the Onsager principle is valid only in the case of fully aged systems. The case of a Poisson distribution of sojourn times is the only one with no aging associated with it, and consequently with no need to establish special initial conditions to fulfill the Onsager principle. We consider the case of a dichotomous fluctuation, and we prove that the Onsager principle is fulfilled for any form of regression to equilibrium provided that the stationary condition holds true. We set the stationary condition on both the CTRW and the GME, thereby creating a condition of total equivalence, regardless of the nature of the waiting-time distribution. As a consequence of this procedure we create a GME that is a bona fide master equation, in spite of being non-Markov. We note that the memory kernel of the GME affords information on the interaction between the system of interest and its bath. The Poisson case yields a bath with infinitely fast fluctuations. We argue that departing from the Poisson form has the effect of creating a condition of infinite memory, and that these results might be useful to shed light on the problem of how to unravel non-Markov quantum master equations.
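A minimal CTRW simulation contrasting exponential (Poisson) sojourn times with an inverse-power-law density, the ingredient behind the aging discussed above; parameters are arbitrary, and this only illustrates how heavy-tailed waits dominate the elapsed time.

```python
# Sketch: continuous-time random walk with power-law sojourn times
# psi(t) ~ t**-(1 + mu), contrasted with the exponential (Poisson) case.
import numpy as np

rng = np.random.default_rng(8)

def ctrw(n_steps, mu=None):
    """Return (final position, elapsed time) after n_steps jumps."""
    if mu is None:                       # Poisson case: exponential sojourns
        waits = rng.exponential(1.0, n_steps)
    else:                                # Pareto sojourns; infinite mean if mu <= 1
        waits = rng.uniform(size=n_steps) ** (-1.0 / mu)
    jumps = rng.choice([-1, 1], size=n_steps)
    return jumps.sum(), waits.sum()

for label, mu in (("exponential", None), ("power-law mu=0.8", 0.8)):
    times = [ctrw(1000, mu)[1] for _ in range(200)]
    # Heavy tails: a few enormous waits dominate, the seed of aging.
    print(label, "median elapsed time:", np.median(times))
```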
Population activity statistics dissect subthreshold and spiking variability in V1.
Bányai, Mihály; Koman, Zsombor; Orbán, Gergő
2017-07-01
Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.
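A toy sketch contrasting the two model classes for a single pair of neurons: the same rectified-Gaussian drive either read out directly (RG) or passed through Poisson spike generation (DSP); this is not the paper's population model, and all parameters are arbitrary.

```python
# Sketch: doubly stochastic Poisson (DSP) vs. rectified Gaussian (RG)
# spike-count statistics for a correlated pair of model neurons.
import numpy as np

rng = np.random.default_rng(9)
n_trials, rho = 50_000, 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
u = rng.multivariate_normal([1.0, 1.0], cov, size=n_trials)
drive = np.clip(u, 0.0, None)             # shared rectified-Gaussian drive

rg_counts = drive * 10.0                  # RG: counts follow the drive itself
dsp_counts = rng.poisson(drive * 10.0)    # DSP: extra Poisson spiking noise

for name, c in (("RG ", rg_counts), ("DSP", dsp_counts)):
    print(name,
          "pairwise corr:", np.corrcoef(c[:, 0], c[:, 1])[0, 1],
          "var/mean:", c[:, 0].var() / c[:, 0].mean())
```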
Detection of Answer Copying Based on the Structure of a High-Stakes Test
ERIC Educational Resources Information Center
Belov, Dmitry I.
2011-01-01
This article presents the Variable Match Index (VM-Index), a new statistic for detecting answer copying. The power of the VM-Index relies on two-dimensional conditioning as well as the structure of the test. The asymptotic distribution of the VM-Index is analyzed by reduction to Poisson trials. A computational study comparing the VM-Index with the…
Justin S. Crotteau; Martin W. Ritchie; J. Morgan Varner
2014-01-01
Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...
REJEKI, Dwi Sarwani Sri; NURHAYATI, Nunung; AJI, Budi; MURHANDARWATI, E. Elsa Herdiana; KUSNANTO, Hari
2018-01-01
Background: Climatic and weather factors are important determinants of the transmission of vector-borne diseases such as malaria. This study aimed to assess relationships between weather factors and malaria cases in endemic areas of Purworejo during 2005–2014, taking human migration and previous case findings into account. Methods: This study employed ecological time series analysis using monthly data. The independent variables were maximum temperature, minimum temperature, maximum humidity, minimum humidity, precipitation, human migration, and previous malaria cases, while the dependent variable was positive malaria cases. Three count data regression models, i.e. the Poisson model, the quasi-Poisson model, and the negative binomial model, were applied to measure the relationship, and the lowest Akaike Information Criterion (AIC) value was used to select the best model. Negative binomial regression was the best model. Results: The model showed that humidity (lag 2), precipitation (lag 3), precipitation (lag 12), migration (lag 1) and previous malaria cases (lag 12) had a significant relationship with malaria cases. Conclusion: Weather, migration, and previous malaria case factors need to be considered as prominent indicators for projecting increases in malaria cases. PMID:29900134
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
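For reference, the exact computation such a network approximates: a forward filter for a two-state hidden Markov model with Poisson observations. The transition matrix and rates below are arbitrary choices for illustration.

```python
# Sketch: exact Bayesian filtering for a 2-state hidden Markov model
# with Poisson observations (the target computation, not the network).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(10)
A = np.array([[0.95, 0.05], [0.10, 0.90]])   # state transition matrix
rates = np.array([2.0, 8.0])                  # Poisson emission rates

# Simulate a hidden state path and its observations.
T, z = 100, 0
states, obs = [], []
for _ in range(T):
    z = rng.choice(2, p=A[z])
    states.append(z)
    obs.append(rng.poisson(rates[z]))

# Forward algorithm: posterior over the hidden state at each step.
belief = np.array([0.5, 0.5])
posteriors = []
for y in obs:
    belief = belief @ A                       # predict
    belief = belief * poisson.pmf(y, rates)   # update with the likelihood
    belief /= belief.sum()
    posteriors.append(belief.copy())

accuracy = np.mean(np.argmax(posteriors, axis=1) == states)
print("state decoding accuracy:", accuracy)
```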
Generating variable and random schedules of reinforcement using Microsoft Excel macros.
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
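A Python analogue of the schedule-value generation described (the article itself uses Excel macros); the specific progressions below are illustrative choices, not the article's exact formulas.

```python
# Sketch: generating variable-interval and random-interval schedule values.
import numpy as np

rng = np.random.default_rng(11)

def variable_interval(mean_s, n):
    """n intervals sampled uniformly around the mean (Fleshler-Hoffman-style
    progressions are another common choice in the literature)."""
    return rng.uniform(0.5 * mean_s, 1.5 * mean_s, n)

def random_interval(mean_s, n, tick_s=1.0):
    """Constant reinforcement probability per tick, i.e. geometric waits."""
    p = tick_s / mean_s
    return rng.geometric(p, n) * tick_s

print(variable_interval(30.0, 5))   # VI 30-s schedule values (seconds)
print(random_interval(30.0, 5))     # RI 30-s schedule values (seconds)
```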
DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.
A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. Tables of the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the…
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Distribution of shortest cycle lengths in random networks
NASA Astrophysics Data System (ADS)
Bonneau, Haggai; Hassid, Aviv; Biham, Ofer; Kühn, Reimer; Katzav, Eytan
2017-12-01
We present analytical results for the distribution of shortest cycle lengths (DSCL) in random networks. The approach is based on the relation between the DSCL and the distribution of shortest path lengths (DSPL). We apply this approach to configuration model networks, for which analytical results for the DSPL were obtained before. We first calculate the fraction of nodes in the network which reside on at least one cycle. Conditioning on being on a cycle, we provide the DSCL over ensembles of configuration model networks with degree distributions which follow a Poisson distribution (Erdős-Rényi network), degenerate distribution (random regular graph), and a power-law distribution (scale-free network). The mean and variance of the DSCL are calculated. The analytical results are found to be in very good agreement with the results of computer simulations.
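A brute-force sketch estimating shortest cycle lengths in an Erdős-Rényi graph by BFS from each node; the per-node value is an upper bound in rare cases, and the paper's approach is analytical rather than this kind of enumeration.

```python
# Sketch: empirical distribution of shortest cycle lengths in an
# Erdos-Renyi graph (Poisson degree distribution), via per-node BFS.
import networkx as nx
from collections import Counter, deque

def shortest_cycle_through(G, s):
    """Length of a shortest BFS-detected cycle through node s
    (None if no cycle is reachable; an upper bound in rare cases)."""
    dist, parent = {s: 0}, {s: None}
    best = None
    q = deque([s])
    while q:
        u = q.popleft()
        for v in G[u]:
            if v not in dist:
                dist[v], parent[v] = dist[u] + 1, u
                q.append(v)
            elif parent[u] != v:              # non-tree edge closes a cycle
                c = dist[u] + dist[v] + 1
                best = c if best is None else min(best, c)
    return best

G = nx.gnp_random_graph(1000, 3.0 / 1000, seed=12)   # mean degree c = 3
lengths = [shortest_cycle_through(G, v) for v in G]
print(Counter(l for l in lengths if l is not None))
```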
Implementing the DC Mode in Cosmological Simulations with Supercomoving Variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnedin, Nickolay Y; Kravtsov, Andrey V; Rudd, Douglas H
2011-06-02
As emphasized by previous studies, proper treatment of the density fluctuation on the fundamental scale of a cosmological simulation volume - the 'DC mode' - is critical for accurate modeling of spatial correlations on scales ≳ 10% of the simulation box size. We provide further illustration of the effects of the DC mode on the abundance of halos in small boxes and show that it is straightforward to incorporate this mode in cosmological codes that use the 'supercomoving' variables. The equations governing the evolution of dark matter and baryons recast with these variables are particularly simple and include the expansion factor, and hence the effect of the DC mode, explicitly only in the Poisson equation.
Heavy Ion Irradiation Fluence Dependence for Single-Event Upsets in a NAND Flash Memory
NASA Technical Reports Server (NTRS)
Chen, Dakai; Wilcox, Edward; Ladbury, Raymond L.; Kim, Hak; Phan, Anthony; Seidleck, Christina; Label, Kenneth
2016-01-01
We investigated the single-event effect (SEE) susceptibility of the Micron 16 nm NAND flash, and found that the single-event upset (SEU) cross section varied inversely with cumulative fluence. We attribute the effect to the variable upset sensitivities of the memory cells. Furthermore, the effect generally impacts only single-cell upsets; the rate of multiple-bit upsets remained relatively constant with fluence. The current test standards and procedures assume that SEUs follow a Poisson process and do not take into account the variability in the error rate with fluence. Therefore, traditional SEE testing techniques may underestimate the on-orbit event rate for a device with variable upset sensitivity.
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
Method for resonant measurement
Rhodes, G.W.; Migliori, A.; Dixon, R.D.
1996-03-05
A method of measurement of objects to determine object flaws, Poisson's ratio (σ) and shear modulus (μ) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identification of resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio using other modes dependent on both the shear modulus and Poisson's ratio.
The effect of dissipative inhomogeneous medium on the statistics of the wave intensity
NASA Technical Reports Server (NTRS)
Saatchi, Sasan S.
1993-01-01
One of the main theoretical points in the theory of wave propagation in random media is the derivation of closed form equations to describe the statistics of the propagating waves. In particular, in one-dimensional problems, the closed form representation of the multiple scattering effects is important since it contributes to understanding problems such as wave localization, backscattering enhancement, and intensity fluctuations. Here, the propagation of plane waves in a layer of one-dimensional dissipative random medium is considered. The medium is modeled by a complex permittivity whose imaginary part is a constant representing the absorption. The one-dimensional problem is mathematically equivalent to the analysis of a transmission line with randomly perturbed distributed parameters and a single-mode lossy waveguide, and the results can be used to study the propagation of radio waves through the atmosphere and the remote sensing of geophysical media. It is assumed that the scattering medium consists of an ensemble of one-dimensional point scatterers randomly positioned in a layer of thickness L with diffuse boundaries. A Poisson impulse process with density lambda is used to model the positions of scatterers in the medium. By employing the Markov properties of this process, an exact closed form equation of Kolmogorov-Feller type is obtained for the probability density of the reflection coefficient. This equation is solved by combining two limiting cases: (1) when the density of scatterers is small; and (2) when the medium is weakly dissipative. A two-variable perturbation method for small lambda is used to obtain solutions valid for thick layers. These solutions are then asymptotically evaluated for small dissipation. To show the effect of dissipation, the mean and fluctuations of the reflected power are obtained. The results are compared with a lossy homogeneous medium and with a lossless inhomogeneous medium, and the regions where the effect of absorption is not essential are discussed.
Radio Occultation Investigation of the Rings of Saturn and Uranus
NASA Technical Reports Server (NTRS)
Marouf, Essam A.
1997-01-01
The proposed work addresses two main objectives: (1) to pursue the development of the random diffraction screen model for analytical/computational characterization of the extinction and near-forward scattering by ring models that include particle crowding, uniform clustering, and clustering along preferred orientations (anisotropy). The characterization is crucial for proper interpretation of past (Voyager) and future (Cassini) ring occultation observations in terms of physical ring properties, and is needed to address outstanding puzzles in the interpretation of the Voyager radio occultation data sets; (2) to continue the development of spectral analysis techniques to identify and characterize the power scattered by all features of Saturn's rings that can be resolved in the Voyager radio occultation observations, and to use the results to constrain the maximum particle size and its abundance. Characterization of the variability of surface mass density among the main ring features and within individual features is important for constraining the ring mass and is relevant to investigations of ring dynamics and origin. We completed the development of the stochastic geometry (random screen) model for the interaction of electromagnetic waves with planetary ring models, and used the model to relate the oblique optical depth and the angular spectrum of the near-forward scattered signal to statistical averages of the stochastic geometry of the randomly blocked area. We developed analytical results based on the assumption of Poisson statistics for particle positions, and investigated the dependence of the oblique optical depth and angular spectrum on the fractional area blocked, vertical ring profile, and incidence angle when the volume fraction is small, demonstrating agreement with the classical radiative transfer predictions for oblique incidence. We also developed simulation procedures to generate statistical realizations of random screens corresponding to uniformly packed ring models, and used the results to characterize the dependence of the extinction and near-forward scattering on ring thickness, packing fraction, and the ring opening angle.
Alam, M S; Bognar, J G; Cain, S; Yasuda, B J
1998-03-10
During the process of microscanning, a controlled vibrating mirror is typically used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image compensated for known imager-system blur while simultaneously estimating the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications on currently available high-performance processors. In the iterative approach, the image shifts are recalculated at each iteration by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to significantly reduce the computational burden compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
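A one-step shift estimate in the same spirit, using phase correlation on a synthetic photon-count frame; this is an illustrative standard technique, not the authors' specific cost function.

```python
# Sketch: one-step estimation of a translational shift between two frames
# by phase correlation (synthetic Poisson-noise image, circular shift).
import numpy as np

rng = np.random.default_rng(13)
N = 64
frame = rng.poisson(50.0, size=(N, N)).astype(float)      # photon counts
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))      # known shift

F1, F2 = np.fft.fft2(frame), np.fft.fft2(shifted)
cross = F2 * np.conj(F1)                  # phase encodes the displacement
corr = np.fft.ifft2(cross / np.abs(cross))
dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
# Map indices above Nyquist back to negative shifts.
dy = dy - N if dy > N // 2 else dy
dx = dx - N if dx > N // 2 else dx
print("estimated shift:", dy, dx)         # expect (3, -5)
```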
Lord, Dominique; Washington, Simon P; Ivan, John N
2005-01-01
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions attached to each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states, perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
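A small simulation of the paper's central point: unobserved heterogeneity plus low exposure yields an apparent excess of zeros without any dual-state process. The gamma mixing distribution and its parameters are arbitrary assumptions.

```python
# Sketch: apparent "excess" zeros from low exposure and unobserved
# heterogeneity, with no dual-state (safe/unsafe) process involved.
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)
n = 10_000
# Site-to-site heterogeneity: gamma-distributed mean crash rates.
mu = rng.gamma(shape=0.5, scale=1.0, size=n)     # low-exposure sites
counts = rng.poisson(mu)

observed_zeros = (counts == 0).mean()
poisson_zeros = stats.poisson.pmf(0, counts.mean())
print("observed zero fraction:     ", observed_zeros)
print("Poisson(mean) zero fraction:", poisson_zeros)
# The gap is what a ZIP model would misread as a 'perfectly safe' state.
```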
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
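A minimal sketch of ML time-of-arrival estimation from Poisson counts by scanning the log-likelihood over candidate delays; the Gaussian pulse shape, background level, and grid are assumptions for illustration, not the paper's system model.

```python
# Sketch: maximum-likelihood time-of-arrival estimation from Poisson
# photon counts, by scanning the log-likelihood over candidate delays.
import numpy as np

rng = np.random.default_rng(15)
T, bins, true_tau = 100.0, 1000, 37.2
t = np.linspace(0.0, T, bins, endpoint=False)
dt = t[1] - t[0]

def intensity(t, tau, peak=50.0, width=2.0, background=1.0):
    """Gaussian pulse on a constant background (illustrative pulse shape)."""
    return background + peak * np.exp(-0.5 * ((t - tau) / width) ** 2)

counts = rng.poisson(intensity(t, true_tau) * dt)

taus = np.linspace(5.0, 95.0, 901)
# Poisson log-likelihood: sum k*log(lam) - lam (constant terms dropped).
ll = [np.sum(counts * np.log(intensity(t, tau) * dt)
             - intensity(t, tau) * dt) for tau in taus]
print("ML estimate of arrival time:", taus[int(np.argmax(ll))])
```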
NASA Astrophysics Data System (ADS)
Lin, Kevin K.; Young, Lai-Sang
2008-05-01
Guided by a geometric understanding developed in earlier works of Wang and Young, we carry out numerical studies of shear-induced chaos in several parallel but different situations. The settings considered include periodic kicking of limit cycles, random kicks at Poisson times and continuous-time driving by white noise. The forcing of a quasi-periodic model describing two coupled oscillators is also investigated. In all cases, positive Lyapunov exponents are found in suitable parameter ranges when the forcing is suitably directed.
Aira, Toivgoo; Tsai, Laura Cordisco; Riedel, Marion; Offringa, Reid; Chang, Mingway; El-Bassel, Nabila; Ssewamala, Fred
2015-01-01
Objectives. We tested whether a structural intervention combining savings-led microfinance and HIV prevention components would achieve enhanced reductions in sexual risk among women engaging in street-based sex work in Ulaanbaatar, Mongolia, compared with an HIV prevention intervention alone. Methods. Between November 2011 and August 2012, we randomized 107 eligible women who completed baseline assessments to either a 4-session HIV sexual risk reduction intervention (HIVSRR) alone (n = 50) or a 34-session HIVSRR plus a savings-led microfinance intervention (n = 57). At 3- and 6-month follow-up assessments, participants reported unprotected acts of vaginal intercourse with paying partners and the number of paying partners with whom they engaged in sexual intercourse in the previous 90 days. Using Poisson and zero-inflated Poisson regression models, we examined the effects of assignment to the treatment versus control condition on outcomes. Results. At 6-month follow-up, the HIVSRR plus microfinance participants reported significantly fewer paying sexual partners and were more likely to report zero unprotected vaginal sex acts with paying sexual partners. Conclusions. Findings advance the HIV prevention repertoire for women, demonstrating that risk reduction may be achieved through a structural intervention that relies on asset building, including savings, and alternatives to income from sex work. PMID:25602889
Properties of the Bivariate Delayed Poisson Process
1974-07-01
and Lewis (1972) in their Berkeley Symposium paper, and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried... Poisson processes. They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres... process because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a
Optimal linear reconstruction of dark matter from halo catalogues
Cai, Yan -Chuan; Bernstein, Gary; Sheth, Ravi K.
2011-04-01
The dark matter lumps (or "halos") that contain galaxies have locations in the Universe that are to some extent random with respect to the overall matter distributions. We investigate how best to estimate the total matter distribution from the locations of the halos. We derive the weight function w(M) to apply to dark-matter haloes that minimizes the stochasticity between the weighted halo distribution and its underlying mass density field. The optimal w(M) depends on the range of masses of halos being used. While the standard biased-Poisson model of the halo distribution predicts that bias weighting is optimal, the simple fact that the mass is comprised of haloes implies that the optimal w(M) will be a mixture of mass-weighting and bias-weighting. In N-body simulations, the Poisson estimator is up to 15× noisier than the optimal. Optimal weighting could make cosmological tests based on the matter power spectrum or cross-correlations much more powerful and/or cost effective.
Multilevel Poisson regression modelling for determining factors of dengue fever cases in Bandung
NASA Astrophysics Data System (ADS)
Arundina, Davila Rubianti; Tantular, Bertho; Pontoh, Resa Septiani
2017-03-01
Dengue fever is caused by a serotype virus of the genus Flavivirus, known as the dengue virus, and is transmitted through the bites of Aedes aegypti mosquitoes infected with the virus. The study was conducted in 151 villages in Bandung. Health analysts believe that two kinds of factors affect dengue cases: internal (individual) factors and external (environmental) factors. The data used in this research are hierarchical, so a multilevel method is appropriate for the modelling; level 1 is the village and level 2 is the sub-district. According to exploratory data analysis, the suitable multilevel specification is the random intercept model. A penalized quasi-likelihood (PQL) approach to the multilevel Poisson model is a proper analysis to determine the factors affecting dengue cases in the city of Bandung. The clean and healthy behavior factor at the village level has an effect on the number of dengue fever cases in the city of Bandung; no factor at the sub-district level has an effect.
Cascades of Particles Moving at Finite Velocity in Hyperbolic Spaces
NASA Astrophysics Data System (ADS)
Cammarota, V.; Orsingher, E.
2008-12-01
A branching process of particles moving at finite velocity over the geodesic lines of the hyperbolic space (Poincaré half-plane and Poincaré disk) is examined. Each particle can split into two particles only once, at Poisson-spaced times, and deviates orthogonally when it splits. At time t, after N(t) Poisson events, there are N(t)+1 particles moving along different geodesic lines. We are able to obtain the exact expression of the mean hyperbolic distance of the center of mass of the cloud of particles. We derive this mean hyperbolic distance in two different and independent ways, and we study the behavior of the relevant expression as t increases and for different values of the parameters c (hyperbolic velocity of motion) and λ (rate of reproduction). The mean hyperbolic distance of each moving particle is also examined, and a useful representation, as the distance of a randomly stopped particle moving over the main geodesic line, is presented.
Percolation and Reinforcement on Complex Networks
NASA Astrophysics Data System (ADS)
Yuan, Xin
Complex networks appear in almost every aspect of our daily life and are widely studied in the fields of physics, mathematics, finance, biology and computer science. This work utilizes percolation theory in statistical physics to explore the percolation properties of complex networks and develops a reinforcement scheme on improving network resilience. This dissertation covers two major parts of my Ph.D. research on complex networks: i) probe--in the context of both traditional percolation and k-core percolation--the resilience of complex networks with tunable degree distributions or directed dependency links under random, localized or targeted attacks; ii) develop and propose a reinforcement scheme to eradicate catastrophic collapses that occur very often in interdependent networks. We first use generating function and probabilistic methods to obtain analytical solutions to percolation properties of interest, such as the giant component size and the critical occupation probability. We study uncorrelated random networks with Poisson, bi-Poisson, power-law, and Kronecker-delta degree distributions and construct those networks based on the configuration model. The computer simulation results show remarkable agreement with theoretical predictions. We discover an increase of network robustness as the degree distribution broadens and a decrease of network robustness as directed dependency links come into play under random attacks. We also find that targeted attacks exert the biggest damage to the structure of both single and interdependent networks in k-core percolation. To strengthen the resilience of interdependent networks, we develop and propose a reinforcement strategy and obtain the critical amount of reinforced nodes analytically for interdependent Erdős-Rényi networks and numerically for scale-free and for random regular networks. Our mechanism leads to improvement of network stability of the West U.S. power grid. This dissertation provides us with a deeper understanding of the effects of structural features on network stability and fresher insights into designing resilient interdependent infrastructure networks.
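For the Poisson (Erdős-Rényi) case mentioned above, the generating-function machinery reduces to a single fixed-point equation: with mean degree c one has G0(x) = G1(x) = e^{c(x-1)}, so the giant-component fraction S solves S = 1 - e^{-cS}. A minimal sketch of that calculation:

```python
import numpy as np

def giant_component_fraction(c: float, tol: float = 1e-12) -> float:
    """Solve S = 1 - exp(-c*S) for a random network with Poisson
    degree distribution (mean degree c) by fixed-point iteration."""
    S = 1.0 if c > 1.0 else 0.0        # start above the nontrivial root
    for _ in range(10_000):
        S_next = 1.0 - np.exp(-c * S)
        if abs(S_next - S) < tol:
            break
        S = S_next
    return S

for c in (0.5, 1.0, 1.5, 2.0, 4.0):
    print(f"mean degree {c:3.1f}: giant component fraction "
          f"{giant_component_fraction(c):.4f}")
```

The transition at c = 1 is the classical percolation threshold; the same fixed-point structure carries over to the other degree distributions via their generating functions.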
Hamiltonian approach to continuum dynamics
NASA Astrophysics Data System (ADS)
Isaev, A. A.; Kovalevskii, M. Yu.; Peletminskii, S. V.
1995-02-01
A study is made of the problem of obtaining the Poisson-bracket algebra of the dynamical variables of continuous media on the basis of specification of the kinematic part of the Lagrangian in terms of generalized coordinates and momenta. Within this algebra, subalgebras of variables corresponding to the description of elastic media, the hydrodynamics of ordinary liquids, and the dynamics of some phases of liquid crystals are identified. The differential conservation laws associated with the symmetries of the Hamiltonian of the system are studied. The dynamics of nematics is considered, and features of the dynamics of the cholesteric, smectic, and discotic phases are noted.
Assessing model uncertainty using hexavalent chromium and ...
Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective of this analysis is to characterize model uncertainty by evaluating the variance in estimates across several epidemiologic analyses. Methods: This analysis compared 7 publications analyzing two different chromate production sites in Ohio and Maryland. The Ohio cohort consisted of 482 workers employed from 1940-72, while the Maryland site employed 2,357 workers from 1950-74. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability in estimates across and within model forms. A total of 7 similarly parameterized analyses were considered across model forms, and 23 analyses with alternative parameterizations were considered within model form (14 Cox; 9 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients for 7 similar analyses ranged from 2.47
A semi-nonparametric Poisson regression model for analyzing motor vehicle crash data.
Ye, Xin; Wang, Ke; Zou, Yajie; Lord, Dominique
2018-01-01
This paper develops a semi-nonparametric Poisson regression model to analyze motor vehicle crash frequency data collected from rural multilane highway segments in California, US. Motor vehicle crash frequency on rural highways is a topic of interest in the area of transportation safety due to higher driving speeds and the resultant severity levels. Unlike the traditional Negative Binomial (NB) model, the semi-nonparametric Poisson regression model can accommodate unobserved heterogeneity following a highly flexible semi-nonparametric (SNP) distribution. Simulation experiments are conducted to demonstrate that the SNP distribution can well mimic a large family of distributions, including normal distributions, log-gamma distributions, bimodal and trimodal distributions. Empirical estimation results show that such flexibility offered by the SNP distribution can greatly improve model precision and the overall goodness-of-fit. The semi-nonparametric distribution can provide a better understanding of crash data structure through its ability to capture potential multimodality in the distribution of unobserved heterogeneity. When estimated coefficients in empirical models are compared, SNP and NB models are found to have a substantially different coefficient for the dummy variable indicating the lane width. The SNP model with better statistical performance suggests that the NB model overestimates the effect of lane width on crash frequency reduction by 83.1%.
A modified Poisson-Boltzmann equation applied to protein adsorption.
Gama, Marlon de Souza; Santos, Mirella Simões; Lima, Eduardo Rocha de Almeida; Tavares, Frederico Wanderley; Barreto, Amaro Gomes Barreto
2018-01-05
Ion-exchange chromatography has been widely used as a standard process in the purification and analysis of proteins, based on the electrostatic interaction between the protein and the stationary phase. Through the years, several approaches have been used to improve the thermodynamic description of colloidal particle-surface interaction systems; however, gaps remain, specifically in describing the behavior of protein adsorption. Here, we present an improved methodology for predicting the adsorption equilibrium constant by solving the modified Poisson-Boltzmann (PB) equation in bispherical coordinates. By including dispersion interactions between ions and protein, and between ions and surface, the modified PB equation used can describe the Hofmeister effects. We solve the modified Poisson-Boltzmann equation to calculate the protein-surface potential of mean force, treated as a spherical colloid-plate system, as a function of process variables. From the potential of mean force, the Henry constants of adsorption, for different proteins and surfaces, are calculated as a function of pH, salt concentration, salt type, and temperature. The obtained Henry constants are compared with experimental data for several isotherms, showing excellent agreement. We have also performed a sensitivity analysis to verify the behavior of different kinds of salts and the Hofmeister effects.
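The bispherical, dispersion-corrected calculation of the paper is involved, but the core numerical task, solving a nonlinear Poisson-Boltzmann equation, can be sketched in one dimension. The toy problem below (my simplification: reduced units, a symmetric electrolyte, a charged flat plate, and no ion dispersion terms, so no Hofmeister effects) solves d²φ/dx² = sinh φ with Newton's method on a finite-difference grid:

```python
import numpy as np

# Reduced units: x in Debye lengths, phi in units of kT/e.
phi_s, L, n = 3.0, 10.0, 401          # surface potential, domain size, grid points
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
phi = phi_s * np.exp(-x)              # Debye-Hueckel (linearized) initial guess
phi[-1] = 0.0

for _ in range(100):                  # Newton iteration on the discretized system
    F = np.zeros(n)
    F[1:-1] = (phi[2:] - 2*phi[1:-1] + phi[:-2]) / h**2 - np.sinh(phi[1:-1])
    J = np.zeros((n, n))
    idx = np.arange(1, n - 1)
    J[idx, idx - 1] = J[idx, idx + 1] = 1.0 / h**2
    J[idx, idx] = -2.0 / h**2 - np.cosh(phi[idx])
    J[0, 0] = J[-1, -1] = 1.0         # Dirichlet boundary rows
    step = np.linalg.solve(J, F)
    phi -= step
    if np.max(np.abs(step)) < 1e-10:
        break

print("potential one Debye length from the surface:", phi[np.searchsorted(x, 1.0)])
```

The full calculation replaces sinh φ with a Boltzmann factor modified by ion-specific dispersion potentials and works in bispherical coordinates, but the Newton-on-a-grid structure is the same.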
Modeling the number of car theft using Poisson regression
NASA Astrophysics Data System (ADS)
Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura
2016-10-01
Regression analysis is the most popular statistical method used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. This paper focuses on the number of car thefts that occurred in districts in Peninsular Malaysia. Two groups of factors were considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged between 25 and 64, number of employed persons and number of unemployed persons are the most influential factors affecting car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners in order to reduce and limit car theft cases in Peninsular Malaysia.
NASA Technical Reports Server (NTRS)
Hoots, F. R.; Fitzpatrick, P. M.
1979-01-01
The classical Poisson equations of rotational motion are used to study the attitude motions of an earth orbiting, rapidly spinning gyroscope perturbed by the effects of general relativity (Einstein theory). The center of mass of the gyroscope is assumed to move about a rotating oblate earth in an evolving elliptic orbit which includes all first-order oblateness effects produced by the earth. A method of averaging is used to obtain a transformation of variables, for the nonresonance case, which significantly simplifies the Poisson differential equations of motion of the gyroscope. Long-term solutions are obtained by an exact analytical integration of the simplified transformed equations. These solutions may be used to predict both the orientation of the gyroscope and the motion of its rotational angular momentum vector as viewed from its center of mass. The results are valid for all eccentricities and all inclinations not near the critical inclination.
NASA Astrophysics Data System (ADS)
Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan
2018-06-01
We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors in the form of Euler-Poisson equations in a bounded interval. In this paper, we first prove the global existence of entropy solutions by vanishing viscosity and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to the space and time variables by introducing modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data and doping profile. Moreover, the novelty of this paper is the uniform bound with respect to time for the weak solutions of the isentropic Euler-Poisson system.
NASA Astrophysics Data System (ADS)
Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro
2017-08-01
This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Gathering nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to the Kac’s program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.
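The elementary one-dimensional building block of GPK theory is the Poisson-Kac (telegraph) process: a velocity of fixed magnitude whose sign flips at the events of a Poisson process. A minimal simulation sketch (parameter values are illustrative), showing the approach to Brownian motion with D = b²/(2λ) in the Kac limit:

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_kac(b=1.0, lam=5.0, T=2.0, dt=1e-3, n_paths=10_000):
    """Simulate x'(t) = b * s(t), where s(t) = +/-1 flips at the
    events of a Poisson process with rate lam."""
    x = np.zeros(n_paths)
    s = rng.choice([-1.0, 1.0], size=n_paths)
    for _ in range(int(T / dt)):
        x += b * s * dt
        s[rng.random(n_paths) < lam * dt] *= -1.0   # P(flip in dt) ~ lam*dt
    return x

b, lam, T = 1.0, 5.0, 2.0
x = poisson_kac(b, lam, T)
D = b**2 / (2 * lam)     # diffusivity in the Kac limit
print("sample variance:", x.var(), " vs. 2*D*T =", 2 * D * T)
```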
Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry
NASA Technical Reports Server (NTRS)
Hong, Yie-Ming
1973-01-01
Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.
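For the photon-limited restoration problem described here, the classical maximum-likelihood iteration is Richardson-Lucy deconvolution; the sketch below (my choice of algorithm and test image, not necessarily the one used in this work) restores an image blurred by a known point-spread function and corrupted by Poisson noise:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=100):
    """Multiplicative ML iteration for Poisson-noise deconvolution."""
    psf_flip = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

rng = np.random.default_rng(7)
truth = np.zeros((64, 64))
truth[24:40, 28:36] = 50.0                     # a bright rectangular object
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.2) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
observed = rng.poisson(fftconvolve(truth, psf, mode="same").clip(min=0)).astype(float)
restored = richardson_lucy(observed, psf)
print("peak of truth / observed / restored:",
      truth.max(), observed.max(), round(restored.max(), 1))
```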
Particle motion around magnetized black holes: Preston-Poisson space-time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konoplya, R. A.
We analyze the motion of massless and massive particles around black holes immersed in an asymptotically uniform magnetic field and surrounded by some mechanical structure, which provides the magnetic field. The space-time is described by the Preston-Poisson metric, which is the generalization of the well-known Ernst metric with a new parameter, tidal force, characterizing the surrounding structure. The Hamilton-Jacobi equations allow the separation of variables in the equatorial plane. The presence of a tidal force from the surroundings considerably changes the parameters of the test particle motion: it increases the radius of circular orbits of particles and increases the binding energy of massive particles going from a given circular orbit to the innermost stable orbit near the black hole. In addition, it increases the distance of the minimal approach, time delay, and bending angle for a ray of light propagating near the black hole.
NASA Astrophysics Data System (ADS)
Gill, G.; Sakrani, T.; Cheng, W.; Zhou, J.
2017-09-01
Traffic safety is a major concern in the transportation industry due to the immense monetary and emotional burden caused by crashes of various severity levels, especially injuries and fatalities. To reduce such crashes on all public roads, safety management processes are commonly implemented which include network screening, problem diagnosis, countermeasure identification, and project prioritization. The selection of countermeasures for potential mitigation of crashes is governed by the influential factors which impact roadway crashes. The crash prediction model is the tool widely adopted by safety practitioners and researchers to link various influential factors to crash occurrences. Many different approaches have been used in past studies to develop better-fitting models which also exhibit prediction accuracy. In this study, a crash prediction model is developed to investigate the vehicular crashes occurring at roadway segments. The spatial and temporal nature of crash data is exploited to form a spatiotemporal model which accounts for the different types of heterogeneity among crash data and geometric or traffic flow variables. This study utilizes the Poisson lognormal model with random effects, which can accommodate the yearly variations in explanatory variables and the spatial correlations among segments. The dependency of vehicular crashes at segments on factors linked with roadway geometry, traffic flow, and road surface type was established, as the width of lanes, posted speed limit, nature of pavement, and AADT were found to be correlated with vehicle crashes.
Thogmartin, W.E.; Sauer, J.R.; Knutson, M.G.
2007-01-01
We used an over-dispersed Poisson regression with fixed and random effects, fitted by Markov chain Monte Carlo methods, to model population spatial patterns of relative abundance of American woodcock (Scolopax minor) across its breeding range in the United States. We predicted North American woodcock Singing Ground Survey counts with a log-linear function of explanatory variables describing habitat, year effects, and observer effects. The model also included a conditional autoregressive term representing potential correlation between adjacent route counts. Categories of explanatory habitat variables in the model included land-cover composition, climate, terrain heterogeneity, and human influence. Woodcock counts were higher in landscapes with more forest, especially aspen (Populus tremuloides) and birch (Betula spp.) forest, and in locations with a high degree of interspersion among forest, shrubs, and grasslands. Woodcock counts were lower in landscapes with a high degree of human development. The most noteworthy practical application of this spatial modeling approach was the ability to map predicted relative abundance. Based on a map of predicted relative abundance derived from the posterior parameter estimates, we identified major concentrations of woodcock abundance in east-central Minnesota, USA, the intersection of Vermont, USA, New York, USA, and Ontario, Canada, the upper peninsula of Michigan, USA, and St. Lawrence County, New York. The functional relations we elucidated for the American woodcock provide a basis for the development of management programs and the model and map may serve to focus management and monitoring on areas and habitat features important to American woodcock.
Ceschini, Fabio L; Andrade, Douglas R; Oliveira, Luis C; Araújo Júnior, Jorge F; Matsudo, Victor K R
2009-01-01
To describe the prevalence of physical inactivity and associated factors among high school students from state public schools in the city of São Paulo, state of São Paulo, Brazil. Sixteen state public schools were randomly selected according to the geographic areas of the city (North, South, East, and West). The sample consisted of 3,845 high school students in 2006. Physical inactivity was measured using the International Physical Activity Questionnaire (short IPAQ) and was defined as practicing moderate and/or vigorous physical activity for a period of less than 300 minutes per week. The independent variables analyzed were: gender, age, socioeconomic level, geographic area of the city, awareness of the Agita São Paulo program, participation in physical education classes, smoking, alcohol intake and time spent per day watching television. Three-level Poisson regression was used for assessing the variables, with a significance level of p < 0.05. The general prevalence of physical inactivity among adolescents in São Paulo was 62.5% (95%CI 60.5-64.1). The factors associated with physical inactivity were gender, age, socioeconomic level, geographic area of the city, awareness of the Agita São Paulo program, non-participation in physical education classes, smoking, alcohol intake and time spent per day watching television. It was concluded that the prevalence of physical inactivity among adolescents in São Paulo was high in all the geographic areas evaluated, and that sociodemographic and behavioral factors contributed significantly to physical inactivity.
Factors associated with caries: a survey of students from southern Brazil
Borges, Tássia Silvana; Schwanke, Natalí Lippert; Reuter, Cézane Priscila; Neto, Léo Kraether; Burgos, Miria Suzana
2016-01-01
Objective: To describe the factors associated with dental caries among students from Santa Cruz do Sul, Rio Grande do Sul, Brazil. Methods: A cross-sectional study was conducted in a random sample of 623 students of both genders, aged 10-17 years old. Tooth decay was assessed using the index of the World Health Organization (1997), DMFT (permanent dentition), which expresses the sum of decayed, missing and filled teeth per person. The maternal educational level was rated using criteria of the Brazilian Association of Market Research Companies. The remaining variables were obtained by a structured questionnaire. Poisson regression analysis was used to test the association between variables using robust models and a subsequently adjusted model. Data were expressed as prevalence ratios (PR). Results: Multivariate analysis identified the following factors related to the experience of dental caries: residence in rural municipalities (PR: 1.15; 95%CI: 1.0-1.3), attending a city school (PR: 3.30; 95%CI: 1.1-9.4) or a state school (PR: 3.40; 95%CI: 1.1-9.6); and having an illiterate mother or a mother that only attended up to the 4th year of school (PR: 1.67; 95%CI: 1.1-2.4) or high school (PR: 1.54; 95%CI: 1.1-2.2). Conclusions: The presence of caries in students in southern Brazil was associated with residence in rural areas, a mother with little education and attendance at a public school. PMID:27477791
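Poisson regression with a robust (sandwich) covariance is the standard way such prevalence ratios are obtained from binary outcomes. A minimal sketch on simulated stand-in data (the variable names and effect sizes are hypothetical, loosely mirroring the survey's factors):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 623
rural = rng.integers(0, 2, n)
public_school = rng.integers(0, 2, n)
low_mat_educ = rng.integers(0, 2, n)
p = 0.35 * 1.15**rural * 1.5**low_mat_educ     # true PRs: 1.15 and 1.5
df = pd.DataFrame({"caries": (rng.random(n) < p).astype(int),
                   "rural": rural, "public_school": public_school,
                   "low_mat_educ": low_mat_educ})

# exp(coefficients) of a robust Poisson fit on a binary outcome are
# prevalence ratios, avoiding the inflation of odds ratios when the
# outcome is common.
fit = sm.GLM.from_formula("caries ~ rural + public_school + low_mat_educ",
                          data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))       # prevalence ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```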
A probabilistic tornado wind hazard model for the continental United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hossain, Q; Kimball, J; Mensing, R
A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within the continental United States (CONUS) is assumed to be a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within the CONUS. The hazard model is an aerial probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
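The aerial-probability logic, Poisson occurrences in time combined with intensity-dependent damage areas, can be sketched as a small Monte Carlo. Every number below is an illustrative placeholder, not an input from the report:

```python
import numpy as np

rng = np.random.default_rng(11)

rate_per_year = 12.0        # Poisson rate of tornadoes in the study region
region_area = 1.0e4         # km^2
f_prob = np.array([0.40, 0.30, 0.17, 0.08, 0.04, 0.01])   # F0..F5 frequencies
path_area = np.array([0.1, 0.5, 2.0, 8.0, 20.0, 50.0])    # damage area, km^2

n_years, hit_years = 50_000, 0
for _ in range(n_years):
    n_torn = rng.poisson(rate_per_year)
    scales = rng.choice(6, size=n_torn, p=f_prob)
    # A point facility is struck when it falls inside a randomly placed
    # damage rectangle of the sampled intensity class.
    hit_years += np.any(rng.random(n_torn) < path_area[scales] / region_area)

print(f"estimated annual strike probability: {hit_years / n_years:.2e}")
```

The full model additionally samples facility size and orientation, path direction, within-path wind-speed variation, and F-scale classification error, and carries epistemic uncertainty by repeating the calculation over alternative distribution models.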
Structural zeroes and zero-inflated models.
He, Hua; Tang, Wan; Wang, Wenjuan; Crits-Christoph, Paul
2014-08-01
In psychosocial and behavioral studies count outcomes recording the frequencies of the occurrence of some health or behavior outcomes (such as the number of unprotected sexual behaviors during a period of time) often contain a preponderance of zeroes because of the presence of 'structural zeroes' that occur when some subjects are not at risk for the behavior of interest. Unlike random zeroes (responses that can be greater than zero, but are zero due to sampling variability), structural zeroes are usually very different, both statistically and clinically. False interpretations of results and study findings may result if differences in the two types of zeroes are ignored. However, in practice, the status of the structural zeroes is often not observed and this latent nature complicates the data analysis. In this article, we focus on one model, the zero-inflated Poisson (ZIP) regression model that is commonly used to address zero-inflated data. We first give a brief overview of the issues of structural zeroes and the ZIP model. We then give an illustration of ZIP with data from a study on HIV-risk sexual behaviors among adolescent girls. Sample codes in SAS and Stata are also included to help perform and explain ZIP analyses.
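The article supplies SAS and Stata code; an analogous fit in Python can be sketched with statsmodels (the data are simulated here with a known 40% structural-zero share, so the inflation estimate can be checked):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
at_risk = rng.random(n) < 0.6                  # 40% structural zeroes
y = np.where(at_risk, rng.poisson(np.exp(0.5 + 0.8 * x)), 0)

X = sm.add_constant(x)
fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)),
                          inflation="logit").fit(disp=False)
print(fit.summary())
# The first parameter is the logit of the structural-zero probability.
print("estimated structural-zero share:", 1.0 / (1.0 + np.exp(-fit.params[0])))
```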
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
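The acceptance probability of an attributes plan is just a binomial tail sum, so its operating-characteristic (OC) curve is one line of code. A short sketch (the plan n = 125, c = 3 is illustrative, not one of the published plans tested):

```python
from scipy.stats import binom

def oc_curve(n: int, c: int, p_values):
    """Single-sampling attributes plan: accept the lot when at most
    c defectives appear in a random sample of n items."""
    return {p: binom.cdf(c, n, p) for p in p_values}

for p, pa in oc_curve(125, 3, [0.005, 0.01, 0.02, 0.05, 0.08]).items():
    print(f"lot fraction defective {p:.3f} -> P(accept) = {pa:.3f}")
```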
Kawaguchi, Hideaki; Koike, Soichi
2016-01-01
Regional disparity in suicide rates is a serious problem worldwide. One possible cause is unequal distribution of the health workforce, especially psychiatrists. Research about the association between regional physician numbers and suicide rates is therefore important but studies are rare. The objective of this study was to evaluate the association between physician numbers and suicide rates in Japan, by municipality. The study included all the municipalities in Japan (n = 1,896). We estimated smoothed standardized mortality ratios of suicide rates for each municipality and evaluated the association between health workforce and suicide rates using a hierarchical Bayesian model accounting for spatially correlated random effects, a conditional autoregressive model. We assumed a Poisson distribution for the observed number of suicides and set the expected number of suicides as the offset variable. The explanatory variables were numbers of physicians, a binary variable for the presence of psychiatrists, and social covariates. After adjustment for socioeconomic factors, suicide rates in municipalities that had at least one psychiatrist were lower than those in the other municipalities. There was, however, a positive and statistically significant association between the number of physicians and suicide rates. Suicide rates in municipalities that had at least one psychiatrist were lower than those in other municipalities, but the number of physicians was positively and significantly related with suicide rates. To improve the regional disparity in suicide rates, the government should encourage psychiatrists to participate in community-based suicide prevention programs and to settle in municipalities that currently have no psychiatrists. The government and other stakeholders should also construct better networks between psychiatrists and non-psychiatrists to support sharing of information for suicide prevention.
Spatial event cluster detection using an approximate normal distribution.
Torabi, Mahmoud; Rosychuk, Rhonda J
2008-12-12
In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically, cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events is large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies twelve out of thirteen of the same clusters as the compound Poisson approach, noting that the compound Poisson method detects twelve significant clusters in total. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence, more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
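The normal approach amounts to a z-test built from the first two moments of the compound Poisson sum S = X_1 + ... + X_N, where N ~ Poisson(λ) counts the cases in an area and the X_i are the events per case: E[S] = λE[X] and Var[S] = λE[X²]. A minimal sketch with assumed moments:

```python
from math import sqrt
from scipy.stats import norm

lam = 40.0            # expected number of cases in the area (from strata rates)
ex, ex2 = 1.6, 4.1    # assumed moments of the events-per-case distribution
observed = 95         # observed event total in the area

# Continuity-corrected z-score against the compound-Poisson mean and variance.
z = (observed - 0.5 - lam * ex) / sqrt(lam * ex2)
print(f"z = {z:.2f}, upper-tail p = {norm.sf(z):.4f}")
```

This replaces the recursion for exact compound-Poisson tail probabilities, which is what becomes burdensome for large event counts or stratified analyses.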
NASA Astrophysics Data System (ADS)
Lowe, Rachel; Bailey, Trevor C.; Stephenson, David B.; Graham, Richard J.; Coelho, Caio A. S.; Sá Carvalho, Marilia; Barcellos, Christovam
2011-03-01
This paper considers the potential for using seasonal climate forecasts in developing an early warning system for dengue fever epidemics in Brazil. In the first instance, a generalised linear model (GLM) is used to select climate and other covariates which are both readily available and prove significant in prediction of confirmed monthly dengue cases based on data collected across the whole of Brazil for the period January 2001 to December 2008 at the microregion level (typically consisting of one large city and several smaller municipalities). The covariates explored include temperature and precipitation data on a 2.5°×2.5° longitude-latitude grid with time lags relevant to dengue transmission, an El Niño Southern Oscillation index and other relevant socio-economic and environmental variables. A negative binomial model formulation is adopted in this model selection to allow for extra-Poisson variation (overdispersion) in the observed dengue counts caused by unknown/unobserved confounding factors and possible correlations in these effects in both time and space. Subsequently, the selected global model is refined in the context of the South East region of Brazil, where dengue predominates, by reverting to a Poisson framework and explicitly modelling the overdispersion through a combination of unstructured and spatio-temporal structured random effects. The resulting spatio-temporal hierarchical model (or GLMM—generalised linear mixed model) is implemented via a Bayesian framework using Markov Chain Monte Carlo (MCMC). Dengue predictions are found to be enhanced both spatially and temporally when using the GLMM and the Bayesian framework allows posterior predictive distributions for dengue cases to be derived, which can be useful for developing a dengue alert system. Using this model, we conclude that seasonal climate forecasts could have potential value in helping to predict dengue incidence months in advance of an epidemic in South East Brazil.
Stedman-Smith, Maggie; DuBois, Cathy L Z; Grey, Scott F; Kingsbury, Diana M; Shakya, Sunita; Scofield, Jennifer; Slenkovich, Ken
2015-04-01
To determine the effectiveness of an office-based multimodal hand hygiene improvement intervention in reducing self-reported communicable infections and work-related absence. A randomized cluster trial including an electronic training video, hand sanitizer, and educational posters (n = 131, intervention; n = 193, control). Primary outcomes include (1) self-reported acute respiratory infections (ARIs)/influenza-like illness (ILI) and/or gastrointestinal (GI) infections during the prior 30 days; and (2) related lost work days. Incidence rate ratios calculated using generalized linear mixed models with a Poisson distribution, adjusted for confounders and random cluster effects. A 31% relative reduction in self-reported combined ARI-ILI/GI infections (incidence rate ratio: 0.69; 95% confidence interval, 0.49 to 0.98). A 21% nonsignificant relative reduction in lost work days. An office-based multimodal hand hygiene improvement intervention demonstrated a substantive reduction in self-reported combined ARI-ILI/GI infections.
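One common way to estimate such cluster-adjusted incidence rate ratios is a Poisson GEE with exchangeable within-cluster correlation; the sketch below uses simulated stand-in data (the trial's analysis used generalized linear mixed models, so this is an analogous rather than identical specification):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n, n_clusters = 324, 12
df = pd.DataFrame({"office": rng.integers(0, n_clusters, n),
                   "treated": rng.integers(0, 2, n)})
# Simulated 30-day illness counts with a true IRR of exp(-0.37) ~ 0.69.
df["episodes"] = rng.poisson(np.exp(-0.5 - 0.37 * df["treated"]))

gee = sm.GEE.from_formula("episodes ~ treated", groups="office", data=df,
                          family=sm.families.Poisson(),
                          cov_struct=sm.cov_struct.Exchangeable())
res = gee.fit()
print("IRR:", np.exp(res.params["treated"]))
print("95% CI:", np.exp(res.conf_int().loc["treated"]).values)
```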
Modeling species-abundance relationships in multi-species collections
Peng, S.; Yin, Z.; Ren, H.; Guo, Q.
2003-01-01
The species-abundance relationship is one of the most fundamental aspects of community ecology. Since Motomura first developed the geometric series model to describe features of community structure, ecologists have developed many other models to fit the species-abundance data in communities. These models can be classified into empirical and theoretical ones, including (1) statistical models, i.e., negative binomial distribution (and its extension), log-series distribution (and its extension), geometric distribution, lognormal distribution, Poisson-lognormal distribution, (2) niche models, i.e., geometric series, broken stick, overlapping niche, particulate niche, random assortment, dominance pre-emption, dominance decay, random fraction, weighted random fraction, composite niche, Zipf or Zipf-Mandelbrot model, and (3) dynamic models describing community dynamics and the restrictive function of the environment on the community. These models have different characteristics and fit species-abundance data in various communities or collections.
Diversity of Poissonian populations.
Eliazar, Iddo I; Sokolov, Igor M
2010-01-01
Populations represented by collections of points scattered randomly on the real line are ubiquitous in science and engineering. The statistical modeling of such populations leads naturally to Poissonian populations: Poisson processes on the real line with a distinguished maximal point. Poissonian populations are infinite objects underlying key issues in statistical physics, probability theory, and random fractals. Due to their infiniteness, measuring the diversity of Poissonian populations depends on the lower-bound cut-off applied. This research characterizes the classes of Poissonian populations whose diversities are invariant with respect to the cut-off level applied and establishes an elemental connection between these classes and extreme-value theory. The measures of diversity considered are variance and dispersion, Simpson's index and inverse participation ratio, Shannon's entropy and Rényi's entropy, and Gini's index.
NASA Astrophysics Data System (ADS)
Dong, Siqun; Zhao, Dianli
2018-01-01
This paper studies the subcritical, near-critical and supercritical asymptotic behavior of a reversible random coagulation-fragmentation polymerization process as N → ∞, with the number of distinct ways to form k-clusters from k units satisfying f(k) = (1 + o(1)) c r^{-k} e^{-k^α} k^{-β}, where 0 < α < 1 and β > 0. When the cluster size is small, its distribution is proved to converge to the Gaussian distribution. For medium clusters, the distribution converges to a Poisson distribution in the supercritical stage, and no large clusters exist in this stage. Furthermore, the largest length of polymers of size N is of order ln N in the subcritical stage when α ⩽ 1/2.
Partitioning neuronal variability
Goris, Robbe L.T.; Movshon, J. Anthony; Simoncelli, Eero P.
2014-01-01
Responses of sensory neurons differ across repeated measurements. This variability is usually treated as stochasticity arising within neurons or neural circuits. However, some portion of the variability arises from fluctuations in excitability due to factors that are not purely sensory, such as arousal, attention, and adaptation. To isolate these fluctuations, we developed a model in which spikes are generated by a Poisson process whose rate is the product of a drive that is sensory in origin, and a gain summarizing stimulus-independent modulatory influences on excitability. This model provides an accurate account of response distributions of visual neurons in macaque LGN, V1, V2, and MT, revealing that variability originates in large part from excitability fluctuations which are correlated over time and between neurons, and which increase in strength along the visual pathway. The model provides a parsimonious explanation for observed systematic dependencies of response variability and covariability on firing rate. PMID:24777419
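The model's signature prediction, Var = mean + CV² × mean² for spike counts, follows directly from compounding a Poisson process with a fluctuating gain. A short simulation sketch (a gamma-distributed gain with unit mean is my assumption for concreteness):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, drive = 100_000, 8.0          # mean spike count set by the stimulus

for gain_cv in (0.0, 0.3, 0.6):
    if gain_cv == 0.0:
        counts = rng.poisson(drive, n_trials)          # pure Poisson baseline
    else:
        shape = 1.0 / gain_cv**2                       # gamma gain, E[g] = 1
        gain = rng.gamma(shape, 1.0 / shape, n_trials)
        counts = rng.poisson(drive * gain)
    predicted = drive + (gain_cv * drive) ** 2         # Var = mu + CV^2 * mu^2
    print(f"gain CV {gain_cv}: mean {counts.mean():.2f}, "
          f"var {counts.var():.2f}, predicted var {predicted:.2f}")
```

Stronger gain fluctuations produce the super-Poisson variability and the mean-dependence of variance described above.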
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Background Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. Results In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Conclusions Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset. PMID:21978359
A method of fitting the gravity model based on the Poisson distribution.
Flowerdew, R; Aitkin, M
1982-05-01
"In this paper, [the authors] suggest an alternative method for fitting the gravity model. In this method, the interaction variable is treated as the outcome of a discrete probability process, whose mean is a function of the size and distance variables. This treatment seems appropriate when the dependent variable represents a count of the number of items (people, vehicles, shipments) moving from one place to another. It would seem to have special advantages where there are some pairs of places between which few items move. The argument will be illustrated with reference to data on the numbers of migrants moving in 1970-1971 between pairs of the 126 labor market areas defined for Great Britain...." excerpt
Fractional Poisson Fields and Martingales
NASA Astrophysics Data System (ADS)
Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely
2018-02-01
We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.
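Simulation of the FPP is straightforward because its waiting times are Mittag-Leffler distributed, for which an exact inversion formula exists (Kozubowski's representation; see, e.g., Fulger, Scalas and Germano, Phys. Rev. E 77, 021122 (2008)). A sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

def ml_waiting_times(nu, lam, size):
    """Mittag-Leffler waiting times of the fractional Poisson process."""
    u = 1.0 - rng.random(size)        # uniforms in (0, 1]
    v = 1.0 - rng.random(size)
    bracket = (np.sin(nu * np.pi) / np.tan(nu * np.pi * v)
               - np.cos(nu * np.pi))
    return -np.log(u) * bracket ** (1.0 / nu) / lam ** (1.0 / nu)

def fpp_counts(nu, lam, t, n_paths=10_000, buffer=400):
    """N_nu(t): renewal count of Mittag-Leffler waiting times up to t."""
    w = ml_waiting_times(nu, lam, (n_paths, buffer))
    return (np.cumsum(w, axis=1) <= t).sum(axis=1)

counts = fpp_counts(nu=0.8, lam=1.0, t=10.0)
print("mean:", counts.mean(), " variance:", counts.var())
# For nu < 1 the variance exceeds the mean, unlike the ordinary Poisson case;
# nu = 1 recovers exponential waiting times and the standard Poisson process.
```

The buffer of 400 waiting times per path is a practical assumption; it is far above the counts typically reached by t = 10 for these parameters.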
On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action
NASA Astrophysics Data System (ADS)
Chekhov, L. O.; Mazzocco, M.
2017-12-01
Let \\mathscr A be the space of bilinear forms on C^N with defining matrices A endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of the structure, and then this structure is extended to systems of bilinear forms whose dynamics is governed by the natural action A\\mapsto B ABT} of the {GL}_N Poisson-Lie group on \\mathscr A. A classification is given of all possible quadratic brackets on (B, A)\\in {GL}_N× \\mathscr A preserving the Poisson property of the action, thus endowing \\mathscr A with the structure of a Poisson homogeneous space. Besides the product Poisson structure on {GL}_N× \\mathscr A, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples (B,C, A)\\in {GL}_N× {GL}_N× \\mathscr A with the Poisson action A\\mapsto B ACT}, and it is shown that \\mathscr A then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles.
Suicides in Active-Duty Enlisted Navy Personnel
1989-07-03
percent confidence intervals (95 percent CI) were calculated assuming a Poisson distribution (Lilienfeld & Lilienfeld, 1980). Variables analyzed in this... efforts to improve the status of women in the Army played a role in this reduction but we lack data to test this hypothesis (Rothberg et al., 1988)... recruit population. Military Medicine, 141:327-331. Jobes, D.A., Berman, A.L., & Josselsen, A.R. (1986). The impact of psychological autopsies on...
On the singularity of the Vlasov-Poisson system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Jian; Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08550
2013-09-15
The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.
A novel method for identifying settings for well-motivated ecologic studies of cancer
Stang, Andreas; Kowall, Bernd; Rusner, Carsten; Trabert, Britton; Bray, Freddie; Schüz, Joachim; McGlynn, Katherine A.; Kuss, Oliver
2016-01-01
A low within-country variability and a large between-country variability in cancer incidence may indicate that ecologic factors are involved in the etiology of the disease. The aim of this study is to explore the within- and between-country variability of cancer incidence to motivate high-quality ecologic studies. We extracted age-standardized incidence rate estimates (world standard population) from 135 regions for the 10 most frequent invasive cancers in Europe for non-Hispanic white populations from Cancer Incidence in Five Continents, Volume X. We fitted weighted multilevel Poisson regression models with random country effects for each cancer and sex. We estimated intraclass correlation coefficients (ICC) and 95% confidence intervals (95%CI). A high ICC indicates a low within and a high between-country variability of rates. The two cancer sites with the highest ICC among men were prostate cancer (0.96, 95%CI: 0.92–0.99) and skin melanoma (0.78, 0.64–0.93). Among women, high ICC were observed for lung cancer (0.84, 0.73–0.95) and breast cancer (0.80, 0.69–0.91). The two most prominent sex differences for ICC occurred for cancers of the head and neck (men: 0.70, 0.55–0.85, women: 0.19, 0.08–0.30) and breast cancer (men: 0.04, 0.01–0.07, women: 0.80, 0.69–0.91). ICCs were relatively low for pancreatic cancer (men: 0.23, 0.10–0.35; women: 0.13, 0.04–0.21) and leukemia (men: 0.12, 0.04–0.21; women: 0.08, 0.02–0.14). For cancers with high ICC for which systematic factors of the health care system, screening and diagnostic activities are not plausible explanations for between-country variations in incidence, cross-country sex-specific ecologic studies may be especially promising. PMID:26595447
NASA Astrophysics Data System (ADS)
Tavala, Amir; Dovzhik, Krishna; Schicker, Klaus; Koschak, Alexandra; Zeilinger, Anton
Probing the visual system of humans and animals in the very low photon rate regime has recently attracted the quantum optics community. In an experiment on isolated photoreceptor cells of Xenopus, the cell output signal was measured while stimulating the cell with pulses of sub-Poisson distributed photons. The results showed a single-photon detection efficiency of 29 +/- 4.7% [1]. Another behavioral experiment on humans suggests a lower detection capability at the perception level, with a success probability of 0.516 +/- 0.01 (i.e. slightly better than a random guess) [2]. Although the species are different, both biological models and experimental observations with classical light stimuli suggest that a fraction of single-photon responses is filtered somewhere within the retinal network and/or during the neural processing in the brain. In this ongoing experiment, we look for a quantitative answer to this question by measuring the output signals of the last neural layer of WT mouse retina using microelectrode arrays. We use a heralded downconversion single-photon source. We stimulate the retina directly, since the eye lens (responsible for 20-50% of optical loss and scattering [2]) has been removed. Here, we demonstrate our first results, which confirm the response to sub-Poisson distributed pulses. This project was supported by the Austrian Academy of Sciences, SFB FoQuS F 4007-N23 funded by FWF, and ERC QIT4QAD 227844 funded by the EU Commission.
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Schupp, Peter; Vysoký, Jan
2014-06-01
We generalize noncommutative gauge theory using Nambu-Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg-Witten map. We construct a covariant Nambu-Poisson gauge theory action, give its first order expansion in the Nambu-Poisson tensor and relate it to a Nambu-Poisson matrix model.
Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G
2017-01-01
Segmenting objects of interest from 3D data sets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution, and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and the design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application that inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging, because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys." In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range 10-20 μm.
Ghanta, Sindhu; Jordan, Michael I.; Kose, Kivanc; Brooks, Dana H.; Rajadhyaksha, Milind; Dy, Jennifer G.
2016-01-01
Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and the design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys". In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range 10-20 µm. PMID:27723590
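Both versions of this abstract rest on the same building block: a spatial Poisson process for candidate object locations. A minimal sketch of sampling such a process on a rectangular window follows; the window size and intensity are hypothetical, and the shape priors and Gibbs sampler of the full model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
width, height, intensity = 100.0, 100.0, 0.02   # points per unit area (assumed)

n_points = rng.poisson(intensity * width * height)  # Poisson-distributed count
xs = rng.uniform(0.0, width, n_points)              # uniform locations given the count
ys = rng.uniform(0.0, height, n_points)
print(f"{n_points} points sampled on a {width:.0f} x {height:.0f} window")
```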
Photon statistics in scintillation crystals
NASA Astrophysics Data System (ADS)
Bora, Vaibhav Joga Singh
Scintillation based gamma-ray detectors are widely used in medical imaging, high-energy physics, astronomy and national security. Scintillation gamma-ray detectors are field-tested, relatively inexpensive, and have good detection efficiency. Semiconductor detectors are gaining popularity because of their superior capability to resolve gamma-ray energies. However, they are relatively hard to manufacture and therefore, at this time, not available in as large formats and are much more expensive than scintillation gamma-ray detectors. Scintillation gamma-ray detectors consist of a scintillator, a material that emits optical (scintillation) photons when it interacts with ionizing radiation, and an optical detector that detects the emitted scintillation photons and converts them into an electrical signal. Compared to semiconductor gamma-ray detectors, scintillation gamma-ray detectors have relatively poor capability to resolve gamma-ray energies. This is in large part attributed to the "statistical limit" on the number of scintillation photons. The origin of this statistical limit is the assumption that scintillation photons are either Poisson distributed or super-Poisson distributed. This statistical limit is often defined by the Fano factor. The Fano factor of an integer-valued random process is defined as the ratio of its variance to its mean. Therefore, a Poisson process has a Fano factor of one. The classical theory of light limits the Fano factor of the number of photons to a value greater than or equal to one (Poisson case). However, the quantum theory of light allows for Fano factors less than one. We used two methods to look at the correlations between two detectors observing the same scintillation pulse to estimate the Fano factor of the scintillation photons. The relationship between the Fano factor and the correlation between the integrals of the two detected signals was analytically derived, and the Fano factor was estimated using the measurements for SrI2:Eu, YAP:Ce and CsI:Na. We also found an empirical relationship between the Fano factor and the covariance as a function of time between two detectors observing the same scintillation pulse. This empirical model was used to estimate the Fano factor of LaBr3:Ce and YAP:Ce using the experimentally measured timing covariance. The estimates of the Fano factor from the timing-covariance results were consistent with the estimates from the correlation between the integral signals. We found the scintillation light from some scintillators to be sub-Poisson. For the same mean number of total scintillation photons, sub-Poisson light has lower noise. We then conducted a simulation study to investigate whether this low-noise sub-Poisson light can be used to improve spatial resolution. We calculated the Cramér-Rao bound for different detector geometries, positions of interaction and Fano factors. The Cramér-Rao calculations were verified by generating simulated data and estimating the variance of the maximum likelihood estimator. We found that the Fano factor has no impact on the spatial resolution in gamma-ray imaging systems.
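The Fano factor definition used throughout this abstract (variance over mean of an integer-valued process) is easy to estimate from repeated counts. An illustrative sketch with hypothetical numbers, not the dissertation's data: a Poisson source gives F ≈ 1, while a binomial source with success probability p is sub-Poisson with F = 1 - p.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pulses, mean_photons = 100_000, 50

poisson_counts = rng.poisson(mean_photons, n_pulses)
sub_poisson_counts = rng.binomial(n=100, p=0.5, size=n_pulses)  # same mean, F = 0.5

for name, c in [("Poisson", poisson_counts), ("sub-Poisson", sub_poisson_counts)]:
    print(f"{name}: Fano factor = {c.var(ddof=1) / c.mean():.3f}")
```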
A multiscale filter for noise reduction of low-dose cone beam projections.
Yao, Weiguang; Farr, Jonathan B
2015-08-21
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector element. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2/(2σ_f^2)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of the residual noise, the optimal σ_f^2 is proved to be proportional to the noiseless fluence and modulated by the local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
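The scale-selection idea can be caricatured in a few lines. The sketch below is not the authors' method: it smooths a simulated Poisson projection at a small set of fixed scales and keeps, per pixel, the largest scale whose local residual stays near the Poisson noise level (variance ≈ mean); the scale set and the 1.5 tolerance factor are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

rng = np.random.default_rng(7)
truth = np.outer(np.linspace(1.0, 2.0, 256), np.linspace(50.0, 200.0, 256))
proj = rng.poisson(truth).astype(float)            # simulated noisy projection

scales = [0.5, 1.0, 2.0, 4.0]                      # candidate filter scales (px)
smoothed = np.stack([gaussian_filter(proj, s) for s in scales])
# Local fitting error of each smoothed version against the raw projection.
resid = np.stack([uniform_filter((proj - sm) ** 2, size=5) for sm in smoothed])
# Accept a scale where the residual is consistent with Poisson noise (var ~ mean).
ok = resid <= 1.5 * np.maximum(smoothed, 1.0)

choice = np.zeros(proj.shape, dtype=int)
for i in range(len(scales)):                       # larger passing scales win
    choice[ok[i]] = i
denoised = np.take_along_axis(smoothed, choice[None, ...], axis=0)[0]
```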
The process of learning in neural net models with Poisson and Gauss connectivities.
Sivridis, L; Kotini, A; Anninos, P
2008-01-01
In this study we examined the dynamic behavior of isolated and non-isolated neural networks with chemical markers that follow a Poisson or Gauss distribution of connectivity. The Poisson distribution shows higher activity in comparison to the Gauss distribution, although the latter has more connections that are obliterated due to randomness. We examined 57 hematoxylin and eosin stained sections from an equal number of autopsy specimens with a diagnosis of "cerebral matter within normal limits". Neural counting was carried out in 5 continuous optic fields, with the use of a simple optical microscope connected to a computer (software: Nikon Act-1 ver. 2). The neuron counts corresponded to a surface area of 0.15 mm². There was a gradual reduction in the number of neurons as age increased: a mean value of 45.8 neurons/0.15 mm² was observed within the age range 21-25 years, 33 neurons/0.15 mm² within the age range 41-45 years, and 19.3 neurons/0.15 mm² within the age range 56-60 years. After the age of 60 it was observed that the number of neurons per unit area stopped decreasing. A correlation was observed between these experimental findings and the theoretical neural model developed by professor Anninos and his colleagues. An equivalence was established between the mean numbers of neurons of the above-mentioned age groups and the highest possible number of synaptic connections per neuron (the highest number of synaptic connections corresponded to the age group 21-25). We then used both inhibitory and excitatory post-synaptic potentials and applied these values to the Poisson and Gauss distributions, while the neuron threshold was varied between 3 and 5. According to the obtained phase diagrams, the hysteresis loops decrease as age increases. These findings are significant as the hysteresis loops can be regarded as the basis for short-term memory.
Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev
2017-07-01
For cancer detection from microscopic biopsy images, the image segmentation step used for segmentation of cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also have intrinsic Poisson noise, and if it is present in the image the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach which can also handle the noise present in the image during the segmentation process itself, i.e., noise removal and segmentation are combined in one step. To address the above issues, in this paper a fourth order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise with a fuzzy c-means segmentation method is proposed. This approach is capable of effectively handling the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation of the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground truth images. The microscopic biopsy data set contains 31 benign and 27 malignant images of size 896 × 768. Region of interest ground truth for all 58 images is also available for this data set. Finally, the result obtained from the proposed approach is compared with the results of popular segmentation algorithms: fuzzy c-means, color k-means, texture based segmentation, and total variation fuzzy c-means approaches. The experimental results show that the proposed approach provides better results in terms of various performance measures such as Jaccard coefficient, dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, random index, global consistency error, and variance of information as compared to other segmentation approaches used for cancer detection. Copyright © 2017 Elsevier B.V. All rights reserved.
Poisson-event-based analysis of cell proliferation.
Summers, Huw D; Wills, John W; Brown, M Rowan; Rees, Paul
2015-05-01
A protocol for the assessment of cell proliferation dynamics is presented. This is based on the measurement of cell division events and their subsequent analysis using Poisson probability statistics. Detailed analysis of proliferation dynamics in heterogeneous populations requires single cell resolution within a time series analysis and so is technically demanding to implement. Here, we show that by focusing on the events during which cells undergo division, rather than directly on the cells themselves, a simplified image acquisition and analysis protocol can be followed, which maintains single cell resolution and reports on the key metrics of cell proliferation. The technique is demonstrated using a microscope with 1.3 μm spatial resolution to track mitotic events within A549 and BEAS-2B cell lines, over a period of up to 48 h. Automated image processing of the bright field images using standard algorithms within the ImageJ software toolkit yielded 87% accurate recording of the manually identified temporal and spatial positions of the mitotic event series. Analysis of the statistics of the interevent times (i.e., times between observed mitoses in a field of view) showed that cell division conformed to a nonhomogeneous Poisson process in which the rate of occurrence of mitotic events, λ, increased exponentially over time, and provided values of the mean intermitotic time of 21.1 ± 1.2 h for the A549 cells and 25.0 ± 1.1 h for the BEAS-2B cells. Comparison of the mitotic event series for the BEAS-2B cell line to that predicted by random Poisson statistics indicated that temporal synchronisation of the cell division process was occurring within 70% of the population, and that this could be increased to 85% through serum starvation of the cell culture. © 2015 International Society for Advancement of Cytometry.
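A nonhomogeneous Poisson process with an exponentially increasing rate, as described in this abstract, can be simulated with Lewis-Shedler thinning. The sketch below uses hypothetical rate parameters and is not fitted to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)
lam0, tau, t_end = 0.5, 24.0, 48.0           # events/h at t=0, e-folding time, hours

def rate(t):
    return lam0 * np.exp(t / tau)            # exponentially increasing event rate

lam_max = rate(t_end)                        # majorising constant rate
t, events = 0.0, []
while True:
    t += rng.exponential(1.0 / lam_max)      # candidate from a homogeneous process
    if t > t_end:
        break
    if rng.uniform() < rate(t) / lam_max:    # accept with probability rate(t)/lam_max
        events.append(t)

inter_event = np.diff(events)
print(f"{len(events)} events, mean inter-event time {inter_event.mean():.2f} h")
```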
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the normal-process model. Therefore, we attempt to describe the statistical samples {δfoF2} with a Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of some deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of agreement between the a posteriori and theoretical distributions are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
Kim, Jaewook; Woo, Sung Sik; Sarpeshkar, Rahul
2018-04-01
The analysis and simulation of complex interacting biochemical reaction pathways in cells is important in all of systems biology and medicine. Yet, the dynamics of even a modest number of noisy or stochastic coupled biochemical reactions is extremely time consuming to simulate. In large part, this is because of the expensive cost of random number and Poisson process generation and the presence of stiff, coupled, nonlinear differential equations. Here, we demonstrate that we can amplify inherent thermal noise in chips to emulate randomness physically, thus alleviating these costs significantly. Concurrently, molecular flux in thermodynamic biochemical reactions maps to thermodynamic electronic current in a transistor such that stiff nonlinear biochemical differential equations are emulated exactly in compact, digitally programmable, highly parallel analog "cytomorphic" transistor circuits. For even small-scale systems involving just 80 stochastic reactions, our 0.35-μm BiCMOS chips yield a 311× speedup in the simulation time of Gillespie's stochastic algorithm over COPASI, a fast biochemical-reaction software simulator that is widely used in computational biology; they yield a 15 500× speedup over equivalent MATLAB stochastic simulations. The chip emulation results are consistent with these software simulations over a large range of signal-to-noise ratios. Most importantly, our physical emulation of Poisson chemical dynamics does not involve any inherently sequential processes and updates such that, unlike prior exact simulation approaches, they are parallelizable, asynchronous, and enable even more speedup for larger-size networks.
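For reference, the software baseline being accelerated here is Gillespie's stochastic simulation algorithm (SSA). A minimal sketch for a single hypothetical reversible reaction A <-> B is shown below; it illustrates the sequential random-number and exponential-waiting-time costs that the chips avoid by emulating the dynamics physically.

```python
import numpy as np

rng = np.random.default_rng(5)
k_f, k_r = 1.0, 0.5                 # forward/reverse rate constants (assumed)
a, b, t, t_end = 100, 0, 0.0, 10.0

while t < t_end:
    props = np.array([k_f * a, k_r * b])       # reaction propensities
    total = props.sum()
    if total == 0.0:
        break
    t += rng.exponential(1.0 / total)          # waiting time to the next reaction
    if rng.uniform() < props[0] / total:       # choose which reaction fires
        a, b = a - 1, b + 1
    else:
        a, b = a + 1, b - 1

print(f"t={t:.2f}: A={a}, B={b}")   # equilibrium near A/B = k_r/k_f
```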
The Effect of Vaccination Coverage and Climate on Japanese Encephalitis in Sarawak, Malaysia
Impoinvil, Daniel E.; Ooi, Mong How; Diggle, Peter J.; Caminade, Cyril; Cardosa, Mary Jane; Morse, Andrew P.
2013-01-01
Background Japanese encephalitis (JE) is the leading cause of viral encephalitis across Asia with approximately 70,000 cases a year and 10,000 to 15,000 deaths. Because JE incidence varies widely over time, partly due to inter-annual climate variability effects on mosquito vector abundance, it becomes more complex to assess the effects of a vaccination programme since more or less climatically favourable years could also contribute to a change in incidence post-vaccination. Therefore, the objective of this study was to quantify vaccination effect on confirmed Japanese encephalitis (JE) cases in Sarawak, Malaysia after controlling for climate variability to better understand temporal dynamics of JE virus transmission and control. Methodology/principal findings Monthly data on serologically confirmed JE cases were acquired from Sibu Hospital in Sarawak from 1997 to 2006. JE vaccine coverage (non-vaccine years vs. vaccine years) and meteorological predictor variables, including temperature, rainfall and the Southern Oscillation index (SOI) were tested for their association with JE cases using Poisson time series analysis and controlling for seasonality and long-term trend. Over the 10-years surveillance period, 133 confirmed JE cases were identified. There was an estimated 61% reduction in JE risk after the introduction of vaccination, when no account is taken of the effects of climate. This reduction is only approximately 45% when the effects of inter-annual variability in climate are controlled for in the model. The Poisson model indicated that rainfall (lag 1-month), minimum temperature (lag 6-months) and SOI (lag 6-months) were positively associated with JE cases. Conclusions/significance This study provides the first improved estimate of JE reduction through vaccination by taking account of climate inter-annual variability. Our analysis confirms that vaccination has substantially reduced JE risk in Sarawak but this benefit may be overestimated if climate effects are ignored. PMID:23951373
A Three-dimensional Polymer Scaffolding Material Exhibiting a Zero Poisson's Ratio.
Soman, Pranav; Fozdar, David Y; Lee, Jin Woo; Phadke, Ameya; Varghese, Shyni; Chen, Shaochen
2012-05-14
Poisson's ratio describes the degree to which a material contracts (or expands) transversally when axially strained. A material with a zero Poisson's ratio does not deform transversally in response to an axial strain (stretching). In tissue engineering applications, scaffolding having a zero Poisson's ratio (ZPR) may be more suitable for emulating the behavior of native tissues and accommodating and transmitting forces to the host tissue site during wound healing (or tissue regrowth). For example, scaffolding with a zero Poisson's ratio may be beneficial in the engineering of cartilage, ligament, corneal, and brain tissues, which are known to possess Poisson's ratios of nearly zero. Here, we report a 3D biomaterial constructed from polyethylene glycol (PEG) exhibiting in-plane Poisson's ratios of zero for large values of axial strain. We use digital micro-mirror device projection printing (DMD-PP) to create single- and double-layer scaffolds composed of semi re-entrant pores whose arrangement and deformation mechanisms contribute to the zero Poisson's ratio. Strain experiments prove the zero-Poisson's-ratio behavior of the scaffolds and show that the addition of layers does not change the Poisson's ratio. Human mesenchymal stem cells (hMSCs) cultured on biomaterials with zero Poisson's ratio demonstrate the feasibility of utilizing these novel materials for biological applications which require little to no transverse deformation resulting from axial strain. Techniques used in this work allow Poisson's ratio to be both scale-independent and independent of the choice of strut material for strains in the elastic regime, and therefore ZPR behavior can be imparted to a variety of photocurable biomaterials.
From Loss of Memory to Poisson.
ERIC Educational Resources Information Center
Johnson, Bruce R.
1983-01-01
A way of presenting the Poisson process and deriving the Poisson distribution for upper-division courses in probability or mathematical statistics is presented. The main feature of the approach lies in the formulation of Poisson postulates with immediate intuitive appeal. (MNS)
Bayesian inference for unidirectional misclassification of a binary response trait.
Xia, Michelle; Gustafson, Paul
2018-03-15
When assessing association between a binary trait and some covariates, the binary response may be subject to unidirectional misclassification. Unidirectional misclassification can occur when revealing a particular level of the trait is associated with a type of cost, such as a social desirability or financial cost. The feasibility of addressing misclassification is commonly obscured by model identification issues. The current paper attempts to study the efficacy of inference when the binary response variable is subject to unidirectional misclassification. From a theoretical perspective, we demonstrate that the key model parameters possess identifiability, except for the case with a single binary covariate. From a practical standpoint, the logistic model with quantitative covariates can be weakly identified, in the sense that the Fisher information matrix may be near singular. This can make learning some parameters difficult under certain parameter settings, even with quite large samples. In other cases, the stronger identification enables the model to provide more effective adjustment for unidirectional misclassification. An extension to the Poisson approximation of the binomial model reveals the identifiability of the Poisson and zero-inflated Poisson models. For fully identified models, the proposed method adjusts for misclassification based on learning from data. For binary models where there is difficulty in identification, the method is useful for sensitivity analyses on the potential impact from unidirectional misclassification. Copyright © 2017 John Wiley & Sons, Ltd.
Assessing uncertainty in published risk estimates using ...
Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective is to characterize model uncertainty by evaluating estimates across published epidemiologic studies of the same cohort. Methods: This analysis was based on 5 studies analyzing a cohort of 2,357 workers employed from 1950-74 in a chromate production plant in Maryland. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability within and between model forms. A total of 5 similarly parameterized analyses were considered across model form, and 16 analyses with alternative parameterizations were considered within model form (10 Cox; 6 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients (betas) for 5 similar analyses ranged from 2.47 to 4.33 (mean = 2.97, σ² = 0.63). Within the 10 Cox models, coefficients ranged from 2.53 to 4.42 (mean = 3.29, σ² = 0.
Smart materials systems through mesoscale patterning
NASA Astrophysics Data System (ADS)
Aksay, Ilhan A.; Groves, John T.; Gruner, Sol M.; Lee, P. C. Y.; Prud'homme, Robert K.; Shih, Wei-Heng; Torquato, Salvatore; Whitesides, George M.
1996-02-01
We report work on the fabrication of smart materials with two unique strategies: (1) self- assembly and (2) laser stereolithography. Both methods are akin to the processes used by biological systems. The first one is ideal for pattern development and the fabrication of miniaturized units in the submicron range and the second one in the 10 micrometer to 1 mm size range. By using these miniaturized units as building blocks, one can also produce smart material systems that can be used at larger length scales such as smart structural components. We have chosen to focus on two novel piezoceramic systems: (1) high-displacement piezoelectric actuators, and (2) piezoceramic hydrophone composites possessing negative Poisson ratio matrices. High-displacement actuators are essential in such applications as linear motors, pumps, switches, loud speakers, variable-focus mirrors, and laser deflectors. Arrays of such units can potentially be used for active vibration control of helicopter rotors as well as the fabrication of adaptive rotors. In the case of piezoceramic hydrophone composites, we utilize matrices having a negative Poisson's ratio in order to produce highly sensitive, miniaturized sensors. We envision such devices having promising new application areas such as the implantation of hydrophones in small blood vessels to monitor blood pressure. Negative Poisson ratio materials have promise as robust shock absorbers, air filters, and fasteners, and hence, can be used in aircraft and land vehicles.
ERIC Educational Resources Information Center
Kong, Nan
2007-01-01
In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... which adds specificity to the model and can make nonlinear data more manageable. Early results show that the ...
Nonlocal Poisson-Fermi model for ionic solvent.
Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob
2016-07-01
We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.
Nonlinear Poisson Equation for Heterogeneous Media
Hu, Langhua; Wei, Guo-Wei
2012-01-01
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules for which experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937
Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik
2014-12-01
Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue to milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation approach for frailty Cox models based on the penalized partial likelihood. The simulation study showed good performance for the Poisson maximum likelihood approach with Gaussian quadrature and biased variance component estimates for both the Poisson maximum likelihood with Laplace approximation and penalized partial likelihood approaches. Copyright © 2014. Published by Elsevier B.V.
Borda, Alfredo; Sanz, Belén; Otero, Laura; Blasco, Teresa; García-Gómez, Francisco J; de Andrés, Fuencisla
2011-01-01
To analyze the association between travel time and participation in a breast cancer screening program adjusted for contextual variables in the province of Segovia (Spain). We performed an ecological study using the following data sources: the Breast Cancer Early Detection Program of the Primary Care Management of Segovia, the Population and Housing Census for 2001 and the municipal register for 2006-2007. The study period comprised January 2006 to December 2007. Dependent variables consisted of the municipal participation rate and the desired level of municipal participation (greater than or equal to 70%). The key independent variable was travel time from the municipality to the mammography unit. Covariables consisted of the municipalities' demographic and socioeconomic factors. We performed univariate and multivariate Poisson regression analyses of the participation rate, and logistic regression of the desired participation level. The sample was composed of 178 municipalities. The mean participation rate was 75.2%. The desired level of participation (≥ 70%) was achieved in 119 municipalities (67%). In the multivariate Poisson and logistic regression analyses, longer travel time was associated with a lower participation rate and with lower participation after adjustment was made for geographic density, age, socioeconomic status and dependency ratio, with a relative risk index of 0.88 (95% CI: 0.81-0.96) and an odds ratio of 0.22 (95% CI: 0.1-0.47), respectively. Travel time to the mammography unit may help to explain participation in breast cancer screening programs. Copyright © 2010 SESPAS. Published by Elsevier Espana. All rights reserved.
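The rate model used in this study is a standard Poisson GLM with an exposure offset. A hedged sketch with simulated data (the variable names and parameter values are hypothetical, not the study's) could look like this:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 178                                               # municipalities, as in the study
travel_time = rng.uniform(5, 90, n)                   # minutes (hypothetical)
eligible = rng.integers(20, 2000, n)                  # invited women per municipality
true_rate = 0.75 * np.exp(-0.004 * (travel_time - 30))
participants = rng.poisson(true_rate * eligible)

X = sm.add_constant(travel_time)
model = sm.GLM(participants, X, family=sm.families.Poisson(),
               offset=np.log(eligible)).fit()         # log(eligible) as exposure offset
print(model.params)   # a negative travel-time coefficient means lower participation
```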
Escobedo, Angel A; Almirall, Pedro; Rumbaut, Raisa; Rodríguez-Morales, Alfonso J
2015-01-01
Climate change and variability are common phenomena affecting various infectious diseases. Many studies have been performed on vector-borne diseases; however, few studies have addressed such influences on intestinal parasitic diseases (e.g., giardiasis). In this study, using nonlinear Poisson regression models, we assessed the potential associations between macroclimatic variation and giardiasis cases in children and school workers from three provinces of Cuba in the context of large sampling and parasitological assessment. Between 2010 and 2012, 293,019 subjects were assessed, of whom 6357 were positive for Giardia (216.95 cases/10,000 pop.; 95%CI 211.7-222.2). The giardiasis rates varied over time from 35.8 to 525.8 cases/10,000 pop. Nonlinear Poisson regression models between the ONI index and giardiasis incidence indicated a significant association (p<0.01). With lower values of ONI, lower incidence of giardiasis was observed in Havana (pseudo r(2)=0.0576; p<0.001) and Guantánamo (pseudo r(2)=0.0376; p<0.001). Although these results are preliminary and the magnitude of the association is not large, they are statistically significant. This indicates the need for further studies assessing in detail the impact of additional macroclimatic and microclimatic variables on the epidemiology of this still important intestinal parasitic disease, not only in Cuba but also in other countries of the Caribbean and Latin American region. Copyright © 2014 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.
Anderson Localization in Quark-Gluon Plasma
NASA Astrophysics Data System (ADS)
Kovács, Tamás G.; Pittler, Ferenc
2010-11-01
At low temperature the low end of the QCD Dirac spectrum is well described by chiral random matrix theory. In contrast, at high temperature there is no similar statistical description of the spectrum. We show that at high temperature the lowest part of the spectrum consists of a band of statistically uncorrelated eigenvalues obeying essentially Poisson statistics and the corresponding eigenvectors are extremely localized. Going up in the spectrum the spectral density rapidly increases and the eigenvectors become more and more delocalized. At the same time the spectral statistics gradually crosses over to the bulk statistics expected from the corresponding random matrix ensemble. This phenomenon is reminiscent of Anderson localization in disordered conductors. Our findings are based on staggered Dirac spectra in quenched lattice simulations with the SU(2) gauge group.
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas of the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believed that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
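The observation that speckle pattern generation is a filtered Poisson process can be reproduced directly: a Poisson number of speckles, uniformly positioned, each contributing a Gaussian-shaped intensity profile. The sketch below uses hypothetical density and radius values and is not the authors' generator:

```python
import numpy as np

rng = np.random.default_rng(2)
size, density, radius = 256, 0.02, 3.0      # px, speckles/px^2, speckle radius (px)

n_speckles = rng.poisson(density * size * size)
cx, cy = rng.uniform(0, size, (2, n_speckles))
yy, xx = np.mgrid[0:size, 0:size]

pattern = np.zeros((size, size))
for x0, y0 in zip(cx, cy):                  # superpose Gaussian-shaped speckles
    pattern += np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * radius ** 2))
pattern /= pattern.max()                    # normalise to [0, 1] grey levels
```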
NASA Astrophysics Data System (ADS)
Sidorova, Mariia; Semenov, Alexej; Hübers, Heinz-Wilhelm; Charaev, Ilya; Kuzmin, Artem; Doerner, Steffen; Siegel, Michael
2017-11-01
We studied timing jitter in the appearance of photon counts in meandering nanowires with different fractional amounts of bends. Intrinsic timing jitter, which is the probability density function of the random time delay between photon absorption in a current-carrying superconducting nanowire and the appearance of the normal domain, reveals two different underlying physical mechanisms. In the deterministic regime, which is realized at large photon energies and large currents, jitter is controlled by the position-dependent detection threshold in the straight parts of the meanders. It decreases with increasing current. At small photon energies, jitter increases and its current dependence disappears. In this probabilistic regime, jitter is controlled by a Poisson process in which magnetic vortices jump randomly across the wire in areas adjacent to the bends.
Saint-Venant end effects for materials with negative Poisson's ratios
NASA Technical Reports Server (NTRS)
Lakes, R. S.
1992-01-01
Results are presented from an analysis of Saint-Venant end effects for materials with negative Poisson's ratio. Examples are presented showing that slow decay of end stress occurs in circular cylinders of negative Poisson's ratio, whereas a sandwich panel containing rigid face sheets and a compliant core exhibits no anomalous effects for negative Poisson's ratio (but exhibits slow stress decay for core Poisson's ratios approaching 0.5). In sandwich panels with stiff but not perfectly rigid face sheets, a negative Poisson's ratio results in end stress decay which is faster than it would be otherwise. It is suggested that the slow decay previously predicted for sandwich strips in plane deformation as a result of the geometry can be mitigated by the use of a negative Poisson's ratio material for the core.
NASA Astrophysics Data System (ADS)
Theodorsen, A.; E Garcia, O.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
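The extended reference model described here (a filtered Poisson process plus an additive noise term) is straightforward to synthesize. The sketch below superposes one-sided exponential pulses with exponentially distributed amplitudes at Poisson arrival times and adds purely additive Gaussian noise; all parameters are hypothetical, and the quoted theoretical moments follow from Campbell's theorem.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, t_end, nu, tau, noise_sd = 0.01, 1000.0, 0.5, 1.0, 0.1
t = np.arange(0.0, t_end, dt)

n_events = rng.poisson(nu * t_end)
arrivals = rng.uniform(0.0, t_end, n_events)
amplitudes = rng.exponential(1.0, n_events)          # unit-mean exponential amplitudes

signal = np.zeros_like(t)
for t0, a in zip(arrivals, amplitudes):              # superpose decaying pulses
    mask = t >= t0
    signal[mask] += a * np.exp(-(t[mask] - t0) / tau)
signal += rng.normal(0.0, noise_sd, t.size)          # purely additive noise term

# Campbell's theorem: mean = nu*<A>*tau; var = nu*<A^2>*tau/2 = nu*tau (since <A^2>=2),
# plus the additive-noise variance.
print(f"mean {signal.mean():.2f} vs theory {nu * tau:.2f}; "
      f"var {signal.var():.2f} vs theory {nu * tau + noise_sd**2:.2f}")
```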
Singer, Donald A.; Menzie, W.D.; Cheng, Qiuming; Bonham-Carter, G. F.
2005-01-01
Estimating numbers of undiscovered mineral deposits is a fundamental part of assessing mineral resources. Some statistical tools can act as guides to low variance, unbiased estimates of the number of deposits. The primary guide is that the estimates must be consistent with the grade and tonnage models. Another statistical guide is the deposit density (i.e., the number of deposits per unit area of permissive rock in well-explored control areas). Preliminary estimates and confidence limits of the number of undiscovered deposits in a tract of given area may be calculated using linear regression and refined using frequency distributions with appropriate parameters. A Poisson distribution leads to estimates having lower relative variances than the regression estimates and implies a random distribution of deposits. Coefficients of variation are used to compare uncertainties of negative binomial, Poisson, or MARK3 empirical distributions that have the same expected number of deposits as the deposit density. Statistical guides presented here allow simple yet robust estimation of the number of undiscovered deposits in permissive terranes.
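The Poisson guide described in the preceding abstract reduces to a one-line expectation: expected deposits = deposit density × permissive area, with Poisson percentiles as confidence limits. A hedged sketch with hypothetical density and area values:

```python
from scipy.stats import poisson

density_per_km2 = 0.002        # deposits per km^2 in well-explored control areas
tract_area_km2 = 5_000.0
expected = density_per_km2 * tract_area_km2      # Poisson mean = 10

lo, hi = poisson.ppf([0.05, 0.95], expected)
cv = poisson.std(expected) / poisson.mean(expected)   # coefficient of variation
print(f"expected {expected:.0f} deposits, 90% interval [{lo:.0f}, {hi:.0f}], CV {cv:.2f}")
```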
Xiao, H; Gao, L D; Li, X J; Lin, X L; Dai, X Y; Zhu, P J; Chen, B Y; Zhang, X X; Zhao, J; Tian, H Y
2013-09-01
The transmission of haemorrhagic fever with renal syndrome (HFRS) is influenced by climatic, reservoir and environmental variables. The epidemiology of the disease was studied over a 6-year period in Changsha. Variables relating to climate, environment, rodent host distribution and disease occurrence were collected monthly and analysed using a time-series adjusted Poisson regression model. It was found that the density of the rodent host and multivariate El Niño Southern Oscillation index had the greatest effect on the transmission of HFRS with lags of 2–6 months. However, a number of climatic and environmental factors played important roles in affecting the density and transmission potential of the rodent host population. It was concluded that the measurement of a number of these variables could be used in disease surveillance to give useful advance warning of potential disease epidemics.
Characterization of Nonhomogeneous Poisson Processes Via Moment Conditions.
1986-08-01
Poisson processes play an important role in many fields. The Poisson process is one of the simplest counting processes and is a building block for ... place of independent increments. This provides a somewhat different viewpoint for examining Poisson processes. In addition, new characterizations for ...
Constructions and classifications of projective Poisson varieties.
Pym, Brent
2018-01-01
This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.
Constructions and classifications of projective Poisson varieties
NASA Astrophysics Data System (ADS)
Pym, Brent
2018-03-01
This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.
2013-01-01
Background High-throughput RNA sequencing (RNA-seq) offers unprecedented power to capture the real dynamics of gene expression. Experimental designs with extensive biological replication present a unique opportunity to exploit this feature and distinguish expression profiles with higher resolution. RNA-seq data analysis methods so far have been mostly applied to data sets with few replicates and their default settings try to provide the best performance under this constraint. These methods are based on two well-known count data distributions: the Poisson and the negative binomial. The way to properly calibrate them with large RNA-seq data sets is not trivial for the non-expert bioinformatics user. Results Here we show that expression profiles produced by extensively-replicated RNA-seq experiments lead to a rich diversity of count data distributions beyond the Poisson and the negative binomial, such as Poisson-Inverse Gaussian or Pólya-Aeppli, which can be captured by a more general family of count data distributions called the Poisson-Tweedie. The flexibility of the Poisson-Tweedie family enables a direct fitting of emerging features of large expression profiles, such as heavy-tails or zero-inflation, without the need to alter a single configuration parameter. We provide a software package for R called tweeDEseq implementing a new test for differential expression based on the Poisson-Tweedie family. Using simulations on synthetic and real RNA-seq data we show that tweeDEseq yields P-values that are equally or more accurate than competing methods under different configuration parameters. By surveying the tiny fraction of sex-specific gene expression changes in human lymphoblastoid cell lines, we also show that tweeDEseq accurately detects differentially expressed genes in a real large RNA-seq data set with improved performance and reproducibility over the previously compared methodologies. Finally, we compared the results with those obtained from microarrays in order to check for reproducibility. Conclusions RNA-seq data with many replicates leads to a handful of count data distributions which can be accurately estimated with the statistical model illustrated in this paper. This method provides a better fit to the underlying biological variability; this may be critical when comparing groups of RNA-seq samples with markedly different count data distributions. The tweeDEseq package forms part of the Bioconductor project and it is available for download at http://www.bioconductor.org. PMID:23965047
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
Disk Density Tuning of a Maximal Random Packing.
Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A
2016-08-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.
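The starting point named in both copies of this abstract, a Maximal Poisson-disk Sampling, can be approximated with naive dart throwing. The sketch below is not the authors' algorithm, does not guarantee maximality, and uses a fixed hypothetical radius; it only illustrates the conflict-free (minimum separation) property that the tuning framework preserves.

```python
import numpy as np

rng = np.random.default_rng(9)
r, attempts, pts = 0.05, 20_000, []

for _ in range(attempts):
    p = rng.uniform(0.0, 1.0, 2)                      # candidate dart in the unit square
    if all((p - q) @ (p - q) >= r * r for q in pts):  # enforce minimum separation r
        pts.append(p)

print(f"{len(pts)} conflict-free disks of radius {r} placed")
```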
Intervention-Based Stochastic Disease Eradication
NASA Astrophysics Data System (ADS)
Billings, Lora; Mier-Y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira
2013-03-01
Disease control is of paramount importance in public health with infectious disease extinction as the ultimate goal. Intervention controls, such as vaccination of susceptible individuals and/or treatment of infectives, are typically based on a deterministic schedule, such as periodically vaccinating susceptible children based on school calendars. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and the speed of infection. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. Supported by the National Institute of General Medical Sciences Award No. R01GM090204
NASA Astrophysics Data System (ADS)
Gatto, Riccardo
2017-12-01
This article considers the random walk over Rp, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
Influence diagnostics for count data under AB-BA crossover trials.
Hao, Chengcheng; von Rosen, Dietrich; von Rosen, Tatjana
2017-12-01
This paper aims to develop diagnostic measures to assess the influence of data perturbations on estimates in AB-BA crossover studies with a Poisson distributed response. Generalised linear mixed models with normally distributed random effects are utilised. We show that in this special case, the model can be decomposed into two independent sub-models, which allows us to derive closed-form expressions for evaluating the changes in the maximum likelihood estimates under several perturbation schemes. The performance of the new influence measures is illustrated by simulation studies and the analysis of a real dataset.
Monitoring Poisson observations using combined applications of Shewhart and EWMA charts
NASA Astrophysics Data System (ADS)
Abujiya, Mu'azu Ramat
2017-11-01
The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures of choice for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are sensitive only to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). The new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
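As a concrete illustration of the combined scheme (without the RSS modification proposed in the study), the following sketch runs a standard Shewhart c-chart and a Poisson EWMA chart side by side on simulated counts; the target mean c0, smoothing constant lam and limit width L are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# In-control Poisson mean, then a shifted segment the charts should detect.
c0, lam = 4.0, 0.25          # target mean and EWMA smoothing constant
counts = np.concatenate([rng.poisson(c0, 200), rng.poisson(c0 + 2.0, 50)])

# Shewhart c-chart: 3-sigma limits around the target mean.
shew_ucl = c0 + 3 * np.sqrt(c0)
shew_lcl = max(c0 - 3 * np.sqrt(c0), 0.0)

# Poisson EWMA: z_t = lam*x_t + (1-lam)*z_{t-1}, with time-varying limits
# Var(z_t) = c0*lam*(1-(1-lam)^(2t))/(2-lam).
z, L = c0, 2.7               # L is an illustrative control-limit width
for t, x in enumerate(counts, start=1):
    z = lam * x + (1 - lam) * z
    var = (c0 * lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t))
    ucl, lcl = c0 + L * np.sqrt(var), max(c0 - L * np.sqrt(var), 0.0)
    if x > shew_ucl or x < shew_lcl or z > ucl or z < lcl:
        print(f"signal at t={t}: x={x}, z={z:.2f}")
        break
```

Running the combined pair, the Shewhart limits tend to catch the large, abrupt excursions while the EWMA statistic accumulates evidence for the sustained small shift after t = 200.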
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images: it correctly simulates the statistical properties, even when the images are rounded off.
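The three schemes are easy to compare numerically. The sketch below assumes "Poisson resampling" amounts to binomial thinning of each recorded pixel count (keeping each count with probability 1/2), which preserves Poisson statistics exactly, whereas the redrawing methods draw a fresh value around half the measured count; this is one plausible reading of the methods, not a reproduction of the original Matlab code.

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.poisson(50.0, size=(128, 128))   # synthetic full-count image

# Poisson resampling: keep each recorded count with probability 1/2
# (binomial thinning), which preserves Poisson statistics exactly.
half_resampled = rng.binomial(full, 0.5)

# Direct redrawing: draw a new pixel value around half the measured count.
half_poisson = rng.poisson(full * 0.5)
half_gaussian = rng.normal(full * 0.5, np.sqrt(full * 0.5))

for name, img in [("resampled", half_resampled),
                  ("Poisson redraw", half_poisson),
                  ("Gaussian redraw", half_gaussian)]:
    print(f"{name}: half/full mean ratio = {img.mean() / full.mean():.3f}, "
          f"var/mean = {img.var() / img.mean():.3f}")   # ~1 for Poisson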
Reisdorfer, Emilene; Büchele, Fátima; Pires, Rodrigo Otávio Moretti; Boing, Antonio Fernando
2012-09-01
The study aimed to describe the prevalence of alcohol use disorders in an adult population from Brazil and its association with demographic, socioeconomic and behavioral variables and health conditions. A population-based cross-sectional survey was conducted with adults (20 to 59 years) of a medium-sized city in Southern Brazil, with a random sample of 1,720 individuals. Cluster sampling was done in two stages: census tract first and household second. Alcohol use disorders were measured using the Alcohol Use Disorders Identification Test (AUDIT), and associations with selected variables were tested by Poisson regression. Results of the multivariate analysis were expressed as prevalence ratios. The prevalence of alcohol use disorders in the population was 18.4% (95% CI: 16.6%-20.3%), higher among men (29.9%) than women (9.3%). The prevalence of abstinence was 30.6%; 6.8% of respondents had already caused problems to themselves or to others after drinking; and 10.3% reported that a relative, friend or doctor had already expressed concern about their drinking. After multivariate analysis, an association with alcohol use disorders remained for being male, being aged 20 to 29 years, being single, self-declaring as light-skinned black, and being an ex-smoker or current smoker. The prevalence of alcohol use disorders identified is high compared with other similar studies, with differences according to sex, age, skin color and tobacco use. These issues must be considered in formulating public health policies aimed at reducing problems related to alcohol use.
Spatial spread of dengue in a non-endemic tropical city in northern Argentina.
Gil, José F; Palacios, Maximiliano; Krolewiecki, Alejandro J; Cortada, Pedro; Flores, Rosana; Jaime, Cesar; Arias, Luis; Villalpando, Carlos; Alberti D'Amato, Anahí M; Nasser, Julio R; Aparicio, Juan P
2016-06-01
After more than eighty years, dengue reemerged in Argentina in 1997. Since then, the largest epidemic in terms of geographical extent, magnitude and mortality was recorded in 2009. In this report we analyzed the DEN-1 epidemic spread in Orán, a mid-size city in a non-endemic tropical area in Northern Argentina, and its correlation with demographic and socioeconomic factors. Cases were diagnosed by ELISA between January and June 2009. We applied a space-time and a spatial scan statistic under a Poisson model. Possible associations between dengue incidence and socioeconomic variables were studied with the Spearman correlation test. The epidemic started from an imported case from Bolivia, and space-time analysis detected two clusters: one in February and the other in April (in the south and the northeast of the city, respectively), with risk ratios of 25.24 and 4.07 (p<0.01). Subsequent cases spread widely around the city without significant space-time clustering. Maximum values of the entomological indices were observed in January, at the beginning of the epidemic (B=21.96; LH=8.39). No statistically significant association between socioeconomic variables and dengue incidence was found, but a positive correlation between population size and the number of cases (p<0.05) was detected. Two mechanisms may explain the observed pattern of epidemic spread in this non-endemic tropical city: a) short-range dispersal of mosquitoes and people generates clusters of cases, and b) long-distance (within the city) human movement contributes to a quasi-random distribution of cases.
Adolescent alcohol use and parental and adolescent socioeconomic position in six European cities.
Bosque-Prous, Marina; Kuipers, Mirte A G; Espelt, Albert; Richter, Matthias; Rimpelä, Arja; Perelman, Julian; Federico, Bruno; Brugal, M Teresa; Lorant, Vincent; Kunst, Anton E
2017-08-08
Many risk behaviours in adolescence are socially patterned. However, it is unclear to what extent socioeconomic position (SEP) influences adolescent drinking in various parts of Europe. We examined how alcohol consumption is associated with parental SEP and adolescents' own SEP among students aged 14-17 years. Cross-sectional data were collected in the 2013 SILNE study. Participants were 8705 students aged 14-17 years from 6 European cities. The dependent variable was weekly binge drinking. Main independent variables were parental SEP (parental education level and family affluence) and adolescents' own SEP (student weekly income and academic achievement). Multilevel Poisson regression models with robust variance and random intercept were fitted to estimate the association between adolescent drinking and SEP. Prevalence of weekly binge drinking was 4.2% (95%CI = 3.8-4.6). Weekly binge drinking was not associated with parental education or family affluence. However, weekly binge drinking was less prevalent in adolescents with high academic achievement than those with low achievement (PR = 0.34; 95%CI = 0.14-0.87), and more prevalent in adolescents with >€50 weekly income compared to those with ≤€5/week (PR = 3.14; 95%CI = 2.23-4.42). These associations were found to vary according to country, but not according to gender or age group. Across the six European cities, adolescent drinking was associated with adolescents' own SEP, but not with parental SEP. Socio-economic inequalities in adolescent drinking seem to stem from adolescents' own situation rather than that of their family.
Cultural Accommodation of Group Substance Abuse Treatment for Latino Adolescents: Results of an RCT
Burrow-Sánchez, Jason J.; Minami, Takuya; Hops, Hyman
2015-01-01
Objectives: Comparative studies examining the difference between empirically supported substance abuse treatments and their culturally accommodated counterparts with participants from a single ethnic minority group are frequently called for in the literature but infrequently conducted in practice. This RCT was conducted to compare the efficacy of an empirically supported standard version of a group-based cognitive-behavioral treatment (S-CBT) to a culturally accommodated version (A-CBT) with a sample of Latino adolescents primarily recruited from the juvenile justice system. Development and testing of the culturally accommodated treatment were guided by the Cultural Accommodation Model for Substance Abuse Treatment (CAM-SAT). Methods: Seventy Latino adolescents (mean age = 15.2; 90% male) were randomly assigned to one of two group-based treatment conditions (S-CBT = 36; A-CBT = 34), with assessments conducted at pretreatment, posttreatment, and 3-month follow-up. Longitudinal Poisson mixed models for count data were used to conduct the major analyses. The primary outcome variable in the analytic models was the number of days any substance was used (including alcohol, except tobacco) in the past 90 days. In addition, the variables ethnic identity, familism, and acculturation were included as cultural moderators in the analysis. Results: Although both conditions produced significant decreases in substance use, the results did not support a time by treatment condition interaction; however, outcomes were moderated by ethnic identity and familism. Conclusions: The findings are discussed with implications for research and practice within the context of providing culturally relevant treatment for Latino adolescents with substance use disorders. PMID:25602465
NASA Astrophysics Data System (ADS)
Cartier, Pierre; DeWitt-Morette, Cecile
2006-11-01
Acknowledgements; List of symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.
The Euler-Poisson-Darboux equation for relativists
NASA Astrophysics Data System (ADS)
Stewart, John M.
2009-09-01
The Euler-Poisson-Darboux (EPD) equation is the simplest linear hyperbolic equation in two independent variables whose coefficients exhibit singularities, and as such must be of interest as a paradigm to relativists. Sadly it receives scant treatment in the textbooks. The first half of this review is didactic in nature. It discusses in the simplest terms possible the nature of solutions of the EPD equation for the timelike and spacelike singularity cases. Also covered is the Riemann representation of solutions of the characteristic initial value problem, which is hard to find in the literature. The second half examines a few of the possible applications, ranging from explicit computation of the leading terms in the far-field backscatter from predominantly outgoing radiation in a Schwarzschild space-time, to computing explicitly the leading terms in the matter-induced singularities in plane symmetric space-times. There are of course many other applications and the aim of this article is to encourage relativists to investigate this underrated paradigm.
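For orientation, one commonly quoted form of the EPD equation (conventions for the parameter vary across the literature) is

\[ u_{tt} + \frac{\mu}{t}\,u_t - u_{xx} = 0, \qquad \mu \in \mathbb{R}, \]

whose lower-order coefficient is singular on the line t = 0, a spacelike line for this form; the timelike singularity case arises when the singular coefficient instead multiplies a spatial derivative (e.g., a term (μ/x) u_x).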
Estimating False Positive Contamination in Crater Annotations from Citizen Science Data
NASA Astrophysics Data System (ADS)
Tar, P. D.; Bugiolacchi, R.; Thacker, N. A.; Gilmour, J. D.
2017-01-01
Web-based citizen science often involves the classification of image features by large numbers of minimally trained volunteers, such as the identification of lunar impact craters under the Moon Zoo project. Whilst such approaches facilitate the analysis of large image data sets, the inexperience of users and ambiguity in image content can lead to contamination from false positive identifications. We give an approach, using Linear Poisson Models and image template matching, that can quantify levels of false positive contamination in citizen science Moon Zoo crater annotations. Linear Poisson Models are a form of machine learning which supports predictive error modelling and goodness-of-fit testing, unlike most alternative machine learning methods. The proposed supervised learning system can reduce the variability in crater counts whilst providing predictive error assessments of the estimated quantities of remaining true versus false annotations. In an area of research influenced by human subjectivity, the proposed method provides a level of objectivity through the utilisation of image evidence, guided by candidate crater identifications.
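The underlying estimation idea can be sketched generically: model the observed histogram of some matching score as a non-negative linear combination of "true crater" and "false positive" template histograms under Poisson counting noise, and fit the weights by maximum likelihood. The code below uses synthetic templates and EM-style multiplicative updates; it illustrates the idea only and is not the authors' Linear Poisson Models implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical template histograms over 20 matching-score bins.
bins = 20
true_t = np.histogram(rng.normal(0.7, 0.10, 50_000), bins=bins, range=(0, 1))[0].astype(float)
false_t = np.histogram(rng.normal(0.3, 0.15, 50_000), bins=bins, range=(0, 1))[0].astype(float)
T = np.stack([true_t / true_t.sum(), false_t / false_t.sum()])  # rows sum to 1

# Synthetic annotation data: 600 true craters plus 400 false positives.
data = rng.poisson(600 * T[0] + 400 * T[1])

# EM multiplicative updates for the Poisson-noise mixture weights.
w = np.full(2, data.sum() / 2.0)
for _ in range(500):
    model = w @ T                                  # expected counts per bin
    w *= (T * (data / np.maximum(model, 1e-12))).sum(axis=1)
print(f"estimated true={w[0]:.0f}, false={w[1]:.0f} (truth: 600/400)")
```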
A regularized vortex-particle mesh method for large eddy simulation
NASA Astrophysics Data System (ADS)
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel, higher-order, FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000, and the obtained results are compared to results from the literature.
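The Poisson solve at the heart of such methods is straightforward on a fully periodic domain. The sketch below is a minimal spectral inverse Laplacian in 2D, without the high-order regularized Green's functions or the mixed open/periodic domains developed in the paper.

```python
import numpy as np

def solve_poisson_periodic(f, L=2 * np.pi):
    """Solve laplacian(u) = f on a periodic square [0, L)^2 via FFT.

    f must have zero mean (a solvability condition on periodic domains);
    the returned u is the zero-mean solution.
    """
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                  # avoid divide-by-zero
    u_hat = -np.fft.fft2(f) / k2
    u_hat[0, 0] = 0.0                               # fix the mean mode
    return np.real(np.fft.ifft2(u_hat))

# Verify against a manufactured solution u = sin(x)cos(2y), so f = -5u.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(2 * Y)
u = solve_poisson_periodic(-5.0 * u_exact)
print("max error:", np.abs(u - u_exact).max())      # ~1e-15 (spectral)
```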
NASA Astrophysics Data System (ADS)
Wang, Fengwen
2018-05-01
This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and, based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval ν ∈ [-0.78, 0.00] are explicitly presented. Numerical evaluations show that the interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.
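In the notation implied above (symbols are mine, not necessarily the paper's), the secant Poisson's ratio at a prescribed longitudinal engineering strain \( \varepsilon_\ell \) with measured lateral strain \( \varepsilon_t \) is

\[ \nu_s(\varepsilon_\ell) = -\,\frac{\varepsilon_t(\varepsilon_\ell)}{\varepsilon_\ell}, \]

so a constant ratio over the strain interval [0.00, 0.20] means the lateral strain stays proportional to the longitudinal strain throughout that interval.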
Nonlinear Poisson equation for heterogeneous media.
Hu, Langhua; Wei, Guo-Wei
2012-08-22
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules for which experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects.
Jacob, Benjamin J; Krapp, Fiorella; Ponce, Mario; Gottuzzo, Eduardo; Griffith, Daniel A; Novak, Robert J
2010-05-01
Spatial autocorrelation is problematic for the classical hierarchical cluster detection tests commonly used in multi-drug resistant tuberculosis (MDR-TB) analyses, as considerable random error can occur. Therefore, when MDR-TB clusters are spatially autocorrelated, the assumption that the clusters are independently random is invalid. In this research, a product moment correlation coefficient (i.e., the Moran's coefficient) was used to quantify local spatial variation in multiple clinical and environmental predictor variables sampled in San Juan de Lurigancho, Lima, Peru. Initially, QuickBird 0.61 m data, encompassing the visible and near-infrared bands, were selected to synthesize images of land cover attributes of the study site. Residential addresses of individual patients with smear-positive MDR-TB were geocoded, prevalence rates calculated, and then digitally overlaid onto the satellite data within a 2 km buffer of 31 georeferenced health centers, using a 10 m² grid-based algorithm. Geographical information system (GIS)-gridded measurements of each health center were generated based on preliminary base maps of the georeferenced data aggregated to block groups and census tracts within each buffered area. A three-dimensional model of the study site was constructed from a digital elevation model (DEM) to determine terrain covariates associated with the sampled MDR-TB covariates. Pearson's correlation was used to evaluate the linear relationship between the DEM and the sampled MDR-TB data. A SAS/GIS® module was then used to calculate univariate statistics and to perform linear and non-linear regression analyses using the sampled predictor variables. The estimates generated from a global autocorrelation analysis were then spatially decomposed into empirical orthogonal bases using a negative binomial regression with a non-homogeneous mean. Results of the DEM analyses indicated a statistically non-significant, linear relationship between georeferenced health centers and the sampled covariate elevation. The data exhibited positive spatial autocorrelation, and the decomposition of Moran's coefficient into uncorrelated, orthogonal map pattern components revealed global spatial heterogeneities necessary to capture latent autocorrelation in the MDR-TB model. It was thus shown that Poisson regression analyses and spatial eigenvector mapping can elucidate the mechanics of MDR-TB transmission by prioritizing clinically and environmentally sampled predictor variables for identifying high-risk populations.
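Moran's coefficient itself is simple to compute. The sketch below evaluates it for rates at georeferenced sites using row-standardised inverse-distance weights on synthetic data; it illustrates the statistic only, not the study's SAS/GIS pipeline or the eigenvector decomposition.

```python
import numpy as np

def morans_i(values, coords):
    """Moran's I with row-standardised inverse-distance weights."""
    z = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.where(d > 0, 1.0 / np.maximum(d, 1e-12), 0.0)
    w /= w.sum(axis=1, keepdims=True)            # row-standardise
    n, s0 = len(values), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

rng = np.random.default_rng(7)
coords = rng.uniform(0, 10, size=(31, 2))        # e.g. 31 health centres
trend = coords[:, 0] * 0.3                       # spatially structured rates
print("clustered:", morans_i(trend + rng.normal(0, 0.3, 31), coords))
print("random   :", morans_i(rng.normal(0, 1, 31), coords))
```

Under the null of spatial randomness the expected value is -1/(n-1), close to zero; clearly positive values indicate the kind of autocorrelation that invalidates independence assumptions.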
Safety assessment of a shallow foundation using the random finite element method
NASA Astrophysics Data System (ADS)
Zaskórski, Łukasz; Puła, Wojciech
2015-04-01
A complex structure of soil and its random character are reasons why soil modeling is a cumbersome task. Heterogeneity of soil has to be considered even within a homogeneous layer of soil. Therefore the estimation of shear strength parameters of soil for the purposes of a geotechnical analysis causes many problems. The applicable standard (Eurocode 7) does not present any explicit method for evaluating characteristic values of soil parameters; only general guidelines on how these values should be estimated can be found. Hence many approaches to the assessment of characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Several approaches to the estimation of characteristic values of soil properties were compared by evaluating the values of the reliability index β which can be achieved by applying each of them. The method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for the influence of fluctuation scales, and the method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were referred to the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of a foundation in conjunction with the design approaches (DA) defined in Eurocode. RFEM was presented by Griffiths and Fenton (1993). It combines the deterministic finite element method, random field theory and Monte Carlo simulations. Random field theory allows the random character of soil parameters to be considered within a homogeneous layer of soil. For this purpose a soil property is treated as a separate random variable in every element of the finite element mesh, with a proper correlation structure between points of a given area. RFEM was applied to estimate which theoretical probability distribution fits the empirical probability distribution of the bearing capacity, based on 3000 realizations. The assessed probability distribution was then applied to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil; hence the friction angle and cohesion were defined as random parameters characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, as it varies within a limited range, while a lognormal distribution was applied for the cohesion. The other properties (Young's modulus, Poisson's ratio and unit weight) were assumed to be deterministic, because they have negligible influence on the stochastic bearing capacity. Griffiths D. V. & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.
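The final step, turning Monte Carlo bearing-capacity realizations into a reliability index β, can be sketched as follows; the sample here is a synthetic stand-in for RFEM output and the design load is hypothetical.

```python
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(42)

# Stand-in for 3000 RFEM bearing-capacity realisations (kPa); in the
# study these would come from random-field finite-element runs.
q = rng.lognormal(mean=6.0, sigma=0.25, size=3000)

# Fit a lognormal to the Monte Carlo sample (location fixed at zero).
shape, loc, scale = lognorm.fit(q, floc=0.0)

design_load = 250.0                                 # hypothetical, kPa
p_f = lognorm.cdf(design_load, shape, loc, scale)   # P(capacity < load)
beta = -norm.ppf(p_f)                               # reliability index
print(f"p_f = {p_f:.2e}, beta = {beta:.2f}")
```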
Malaria transmission in two localities in north-western Argentina
Dantur Juri, María J; Zaidenberg, Mario; Claps, Guillermo L; Santana, Mirta; Almirón, Walter R
2009-01-01
Background: Malaria is one of the most important tropical diseases affecting people globally. The influence of environmental conditions on the patterns of temporal distribution of malaria vectors and the disease has been studied in different countries. In the present study, ecological aspects of the malaria vector Anopheles (Anopheles) pseudopunctipennis and their relationship with climatic variables, as well as the seasonality of malaria cases, were studied in two localities, El Oculto and Aguas Blancas, in north-western Argentina. Methods: The fluctuation of An. pseudopunctipennis and the distribution of malaria cases were analysed with random-effects Poisson regression. This analysis takes into account the effect of each climatic variable on the abundance of both the vector and malaria cases, yielding predicted values expressed as incidence rate ratios. Results: The number of specimens collected in El Oculto and Aguas Blancas was 4224 (88.07%) and 572 (11.93%), respectively. In El Oculto no marked seasonality was found, in contrast to Aguas Blancas, where high abundance was detected at the end of spring and the beginning of summer. The maximum mean temperature affected the An. pseudopunctipennis fluctuation in El Oculto and Aguas Blancas. When considering the relationship between the number of malaria cases and the climatic variables, in El Oculto the maximum mean temperature and accumulated rainfall were significant, in contrast with Aguas Blancas, where mean temperature and humidity showed a closer relationship to the fluctuation of the disease. Conclusion: The temporal distribution patterns of An. pseudopunctipennis vary between the two localities, but spring appears to be the season with the best conditions for mosquito development. Maximum mean temperature was the most important variable in both localities. Malaria cases were influenced by the maximum mean temperature in El Oculto, while mean temperature and humidity were significant in Aguas Blancas. In Aguas Blancas, peaks of mosquito abundance were observed, followed three months later by peaks of malaria cases. The study reported here will help to increase knowledge not only of vector and malaria seasonality but also of their relationships with the climatic variables that influence their appearance and abundance. PMID:19152707
Contextuality in canonical systems of random variables
NASA Astrophysics Data System (ADS)
Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.
2017-10-01
Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
Statistical properties of solar flares and coronal mass ejections through the solar cycle
NASA Astrophysics Data System (ADS)
Telloni, Daniele; Carbone, Vincenzo; Lepreti, Fabio; Antonucci, Ester
2016-03-01
Waiting Time Distributions (WTDs) of solar flares are investigated all through the solar cycle. The same approach applied to Coronal Mass Ejections (CMEs) in a previous work is considered here for flare occurrence. Our analysis reveals that flares and CMEs share some common statistical properties, which turn out to depend on the level of solar activity. Both flares and CMEs seem to occur independently during solar minimum phases, whilst their WTDs significantly deviate from a Poisson function at solar maximum, thus suggesting that these events are correlated. The characteristics of the WTDs are constrained by the physical processes generating the eruptions associated with flares and CMEs. A scenario may be drawn in which different mechanisms are at work during different phases of the solar cycle. Stochastic processes, most likely related to random magnetic reconnections of the field lines, seem to play a key role during solar minimum periods. On the other hand, persistent processes, like sympathetic eruptions associated with the variability of the photospheric magnetism, are suggested to dominate during periods of high solar activity. Moreover, despite the similar statistical properties shown by flares and CMEs, their WTDs differ in some aspects. During solar minimum periods, the randomness of flare occurrence seems to be more evident than for CMEs. The persistent mechanisms generating interdependent events during maximum periods of solar activity may play a more important role for CMEs than for flares, mitigating the competing action of the random processes, which seem instead strong enough to weaken the correlations among flare occurrences during solar minimum periods. However, it cannot be excluded that the physical processes underlying the temporal correlation between solar events are different for flares and CMEs, or, more likely, that more sophisticated effects are at work at the same time, leading to an even more complex picture. This work represents a first step for further investigations.
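A basic version of the WTD diagnostic is easy to reproduce: compute waiting times from an event list and compare them with the exponential distribution implied by a homogeneous Poisson process. The sketch below does this with a KS test on synthetic Poisson-like and clustered event lists; note that estimating the rate from the same data makes the p-value only approximate.

```python
import numpy as np
from scipy.stats import expon, kstest

rng = np.random.default_rng(5)

# Synthetic "solar minimum" events: a homogeneous Poisson process.
poisson_times = np.cumsum(rng.exponential(1.0, 500))
# Synthetic "solar maximum" events: clustered (correlated) occurrences.
bursts = np.cumsum(rng.exponential(10.0, 60))
clustered_times = np.sort(np.concatenate(
    [b + np.cumsum(rng.exponential(0.2, 8)) for b in bursts]))

for label, t in [("Poisson-like", poisson_times),
                 ("clustered", clustered_times)]:
    wt = np.diff(t)
    # KS test against an exponential with the empirical mean rate.
    stat, p = kstest(wt, expon(scale=wt.mean()).cdf)
    print(f"{label}: KS p-value = {p:.3g}")
```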
Sandora, Thomas J; Shih, Mei-Chiung; Goldmann, Donald A
2008-06-01
Students often miss school because of gastrointestinal and respiratory illnesses. We assessed the effectiveness of a multifactorial intervention, including alcohol-based hand-sanitizer and surface disinfection, in reducing absenteeism caused by gastrointestinal and respiratory illnesses in elementary school students. We performed a school-based cluster-randomized, controlled trial at a single elementary school. Eligible students in third to fifth grade were enrolled. Intervention classrooms received alcohol-based hand sanitizer to use at school and quaternary ammonium wipes to disinfect classroom surfaces daily for 8 weeks; control classrooms followed usual hand-washing and cleaning practices. Parents completed a preintervention demographic survey. Absences were recorded along with the reason for absence. Swabs of environmental surfaces were evaluated by bacterial culture and polymerase chain reaction for norovirus, respiratory syncytial virus, influenza, and parainfluenza 3. The primary outcomes were rates of absenteeism caused by gastrointestinal or respiratory illness. Days absent were modeled as correlated Poisson variables and compared between groups by using generalized estimating equations. Analyses were adjusted for family size, race, health status, and home sanitizer use. We also compared the presence of viruses and the total bacterial colony counts on several classroom surfaces. A total of 285 students were randomly assigned; baseline demographics were similar in the 2 groups. The adjusted absenteeism rate for gastrointestinal illness was significantly lower in the intervention-group subjects compared with control subjects. The adjusted absenteeism rate for respiratory illness was not significantly different between groups. Norovirus was the only virus detected and was found less frequently on surfaces in intervention classrooms compared with control classrooms (9% vs 29%). A multifactorial intervention including hand sanitizer and surface disinfection reduced absenteeism caused by gastrointestinal illness in elementary school students. Norovirus was found less often on classroom surfaces in the intervention group. Schools should consider adopting these practices to reduce days lost to common illnesses.
Advanced analysis of forest fire clustering
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail; Pereira, Mario; Golay, Jean
2017-04-01
Analysis of point pattern clustering is an important topic in spatial statistics and for many applications: biodiversity, epidemiology, natural hazards, geomarketing, etc. There are several fundamental approaches used to quantify spatial data clustering using topological, statistical and fractal measures. In the present research, the recently introduced multi-point Morisita index (mMI) is applied to study the spatial clustering of forest fires in Portugal. The data set consists of more than 30000 fire events covering the time period from 1975 to 2013. The distribution of forest fires is very complex and highly variable in space. mMI is a multi-point extension of the classical two-point Morisita index. In essence, mMI is estimated by covering the region under study by a grid and by computing how many times more likely it is that m points selected at random will be from the same grid cell than it would be in the case of a completely random (Poisson) process. By changing the number of grid cells (size of the grid cells), mMI characterizes the scaling properties of spatial clustering. From mMI, the data intrinsic dimension (fractal dimension) of the point distribution can be estimated as well. In this study, the mMI of forest fires is compared with the mMI of random patterns (RPs) generated within the validity domain defined as the forest area of Portugal. It turns out that the forest fires are highly clustered inside the validity domain in comparison with the RPs. Moreover, they demonstrate different scaling properties at different spatial scales. The results obtained from the mMI analysis are also compared with those of fractal measures of clustering - box counting and sand box counting approaches. References: Golay J., Kanevski M., Vega Orozco C., Leuenberger M. (2014). The multipoint Morisita index for the analysis of spatial patterns. Physica A, 406, 191-202. Golay J., Kanevski M. (2015). A new estimator of intrinsic dimension based on the multipoint Morisita index. Pattern Recognition, 48, 4070-4081.
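Following one common falling-factorial form of the index (my reading of Golay and Kanevski, 2014), the sketch below computes the m-point Morisita index on a square grid for a uniform and a clustered synthetic point set; values near 1 indicate complete spatial randomness and larger values indicate clustering.

```python
import numpy as np

def multipoint_morisita(points, n_cells, m=2):
    """Multi-point Morisita index on a square grid with n_cells per side.

    Uses the falling-factorial form
    I_m = Q^(m-1) * sum_q n_q(n_q-1)...(n_q-m+1) / [N(N-1)...(N-m+1)],
    where Q is the number of cells and n_q the count in cell q.
    """
    q_idx = np.floor(points * n_cells).astype(int).clip(0, n_cells - 1)
    counts = np.bincount(q_idx[:, 0] * n_cells + q_idx[:, 1],
                         minlength=n_cells**2).astype(float)
    N, Q = counts.sum(), float(n_cells**2)

    def falling(x, order):
        out = np.ones_like(x)
        for j in range(order):
            out = out * (x - j)
        return out

    return Q**(m - 1) * falling(counts, m).sum() / falling(np.array([N]), m)[0]

rng = np.random.default_rng(11)
uniform = rng.uniform(size=(3000, 2))                     # CSR benchmark
clusters = (rng.uniform(size=(30, 2))[rng.integers(0, 30, 3000)]
            + rng.normal(0, 0.01, (3000, 2))) % 1.0       # clustered set
for m in (2, 3, 4):
    print(m, multipoint_morisita(uniform, 10, m),
          multipoint_morisita(clusters, 10, m))
```

Sweeping n_cells (the grid resolution) traces out the scaling behaviour described above, from which an intrinsic dimension can be read off.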
Dummer, Benjamin; Wieland, Stefan; Lindner, Benjamin
2014-01-01
A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies, as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of the input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random, sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.
Scaling Limits and Generic Bounds for Exploration Processes
NASA Astrophysics Data System (ADS)
Bermolen, Paola; Jonckheere, Matthieu; Sanders, Jaron
2017-12-01
We consider exploration algorithms of the random sequential adsorption type, both for homogeneous random graphs and for random geometric graphs based on spatial Poisson processes. At each step, a vertex of the graph becomes active and its neighboring nodes become blocked. Given an initial number of vertices N growing to infinity, we study the statistical properties of the proportion of explored (active or blocked) nodes in time using scaling limits. We obtain exact limits for homogeneous graphs and prove an explicit central limit theorem for the final proportion of active nodes, known as the jamming constant, through a diffusion approximation for the exploration process, which can be described as a unidimensional process. We then focus on bounding the trajectories of such exploration processes on random geometric graphs, i.e., random sequential adsorption. As opposed to exploration processes on homogeneous random graphs, these do not allow for such a dimensional reduction. Instead we derive a fundamental relationship between the number of explored nodes and the discovered volume in the spatial process, and we obtain generic bounds for the fluid limit and the jamming constant: bounds that are independent of the dimension of space and of the detailed shape of the volume associated with the discovered node. Lastly, using coupling techniques, we give trajectorial interpretations of the generic bounds.
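The random geometric graph case is easy to simulate directly. The sketch below estimates the jamming constant (final active fraction) by running random sequential adsorption on a graph built from a Poisson number of uniform points in the unit square; the intensity and radius are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def rgg_jamming_fraction(n_mean, radius):
    """Random sequential adsorption on a random geometric graph.

    Nodes are a Poisson(n_mean) number of uniform points in the unit
    square; vertices within `radius` are neighbours. Vertices activate
    in uniformly random order, blocking all free neighbours of each
    newly active vertex; returns the final fraction of active vertices.
    """
    n = rng.poisson(n_mean)
    pts = rng.uniform(size=(n, 2))
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    adj = (d < radius) & ~np.eye(n, dtype=bool)
    state = np.zeros(n, dtype=int)          # 0 free, 1 active, -1 blocked
    for v in rng.permutation(n):
        if state[v] == 0:
            state[v] = 1
            state[(state == 0) & adj[v]] = -1
    return (state == 1).mean()

runs = [rgg_jamming_fraction(1000, 0.05) for _ in range(20)]
print(f"jamming constant ~ {np.mean(runs):.3f} +/- {np.std(runs):.3f}")
```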
DuBois, Cathy L.Z.; Grey, Scott F.; Kingsbury, Diana M.; Shakya, Sunita; Scofield, Jennifer; Slenkovich, Ken
2015-01-01
Objective: To determine the effectiveness of an office-based multimodal hand hygiene improvement intervention in reducing self-reported communicable infections and work-related absence. Methods: A randomized cluster trial including an electronic training video, hand sanitizer, and educational posters (n = 131, intervention; n = 193, control). Primary outcomes include (1) self-reported acute respiratory infections (ARIs)/influenza-like illness (ILI) and/or gastrointestinal (GI) infections during the prior 30 days; and (2) related lost work days. Incidence rate ratios were calculated using generalized linear mixed models with a Poisson distribution, adjusted for confounders and random cluster effects. Results: A 31% relative reduction in self-reported combined ARI-ILI/GI infections (incidence rate ratio: 0.69; 95% confidence interval, 0.49 to 0.98). A 21% nonsignificant relative reduction in lost work days. Conclusions: An office-based multimodal hand hygiene improvement intervention demonstrated a substantive reduction in self-reported combined ARI-ILI/GI infections. PMID:25719534
Dimensional study of the dynamical arrest in a random Lorentz gas.
Jin, Yuliang; Charbonneau, Patrick
2015-04-01
The random Lorentz gas (RLG) is a minimal model for transport in heterogeneous media. Upon increasing the obstacle density, it exhibits a growing subdiffusive transport regime and then a dynamical arrest. Here, we study the dimensional dependence of the dynamical arrest, which can be mapped onto the void percolation transition for Poisson-distributed point obstacles. We numerically determine the arrest in dimensions d=2-6. Comparison of the results with standard mode-coupling theory reveals that the dynamical theory prediction grows increasingly worse with d. In an effort to clarify the origin of this discrepancy, we relate the dynamical arrest in the RLG to the dynamic glass transition of the infinite-range Mari-Kurchan-model glass former. Through a mixed static and dynamical analysis, we then extract an improved dimensional scaling form as well as a geometrical upper bound for the arrest. The results suggest that understanding the asymptotic behavior of the random Lorentz gas may be key to surmounting fundamental difficulties with the mode-coupling theory of glasses.
A Demographic Analysis of Suicide Among U.S. Navy Personnel
1997-08-01
estimates of a Poisson-distributed variable according to the procedure described in Lilienfeld and Lilienfeld [27]. Based on averaged age-specific rates of... n suicides, the total number of pairs will be n(n-1)/2. The Knox method tests the null hypothesis that the event of a pair of suicides being close... significantly differ. It is likely, however, that the military's required suicide prevention programs and psychological autopsies help to ascertain as
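The Knox statistic mentioned in the fragment above can be evaluated with a Monte Carlo permutation version, sketched below on illustrative data; the closeness thresholds are arbitrary choices, not the report's.

```python
import numpy as np

rng = np.random.default_rng(9)

def knox_test(times, coords, t_close, d_close, n_perm=2000):
    """Knox space-time interaction test via Monte Carlo permutation.

    Counts pairs that are close in BOTH time and space, then compares
    the count with its distribution under random relabelling of times
    (the null of no space-time interaction).
    """
    n = len(times)
    dd = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    iu = np.triu_indices(n, k=1)                 # the n(n-1)/2 pairs
    close_d = dd[iu] < d_close
    dt = np.abs(times[:, None] - times[None, :])
    x_obs = np.sum((dt[iu] < t_close) & close_d)
    null = np.empty(n_perm)
    for b in range(n_perm):
        pt = rng.permutation(times)
        dt_b = np.abs(pt[:, None] - pt[None, :])
        null[b] = np.sum((dt_b[iu] < t_close) & close_d)
    p = (1 + np.sum(null >= x_obs)) / (n_perm + 1)
    return x_obs, p

# Illustrative data: 40 events, times in days, coordinates in km.
times = rng.uniform(0, 365, 40)
coords = rng.uniform(0, 100, (40, 2))
print(knox_test(times, coords, t_close=30, d_close=10))
```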
Models for forecasting the flowering of Cornicabra olive groves.
Rojo, Jesús; Pérez-Badia, Rosa
2015-11-01
This study examined the impact of weather-related variables on flowering phenology in the Cornicabra olive tree and constructed models based on linear and Poisson regression to forecast the onset and length of the pre-flowering and flowering phenophases. Spain is the world's leading olive oil producer, and the Cornicabra variety is the second largest Spanish variety in terms of surface area. However, there has been little phenological research into this variety. Phenological observations were made over a 5-year period (2009-2013) at four sampling sites in the province of Toledo (central Spain). Results showed that the onset of the pre-flowering phase is governed largely by temperature, displaying a positive correlation with temperature at the start of dormancy (November) and a negative correlation with temperature during the months prior to budburst (January, February and March). A similar relationship was recorded for the onset of flowering. Other weather-related variables, including solar radiation and rainfall, also influenced the succession of olive flowering phenophases. Linear models proved the most suitable for forecasting the onset and length of the pre-flowering period and the onset of flowering. The onset and length of pre-flowering can be predicted up to 1 or 2 months prior to budburst, whilst the onset of flowering can be forecast up to 3 months beforehand. By contrast, a nonlinear model using Poisson regression was best suited to predicting the length of the flowering period.
Unimodularity criteria for Poisson structures on foliated manifolds
NASA Astrophysics Data System (ADS)
Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury
2018-03-01
We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class.
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
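The paper's test statistic is not reproduced here, but a crude parametric-bootstrap check of the same null conveys the idea: fit a Poisson by maximum likelihood and ask whether the observed number of zeros is extreme under that fit. Note that estimating λ from the contaminated sample itself pulls the fitted mean down and makes this check conservative.

```python
import numpy as np

rng = np.random.default_rng(13)

def excess_zero_pvalue(y, n_boot=5000):
    """Parametric bootstrap check for inflated zeros under a Poisson fit.

    Fits lambda by ML (the sample mean), then asks how often a Poisson
    sample of the same size shows at least as many zeros as observed.
    """
    lam, n = y.mean(), len(y)
    z_obs = np.sum(y == 0)
    z_boot = (rng.poisson(lam, size=(n_boot, n)) == 0).sum(axis=1)
    return (1 + np.sum(z_boot >= z_obs)) / (n_boot + 1)

# Zero-inflated sample: 30% structural zeros mixed into Poisson(2).
y_zip = np.where(rng.uniform(size=500) < 0.3, 0, rng.poisson(2.0, 500))
y_poi = rng.poisson(2.0, 500)
print("ZIP sample p =", excess_zero_pvalue(y_zip))
print("Poisson    p =", excess_zero_pvalue(y_poi))
```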
Regional differences associated with drinking and driving in Brazil.
De Boni, Raquel; von Diemen, Lisia; Duarte, Paulina do Carmo Arruda Vieira; Bumaguin, Daniela Benzano; Hilgert, Juliana Balbinot; Bozzetti, Mary Clarisse; Sordi, Anne; Pechansky, Flavio
2012-10-01
To evaluate regional differences and similarities associated with drinking and driving (DUI) in the five Brazilian macro-regions. A roadside survey was conducted in the 27 Brazilian state capitals. A total of 3,398 drivers were randomly selected and given a structured interview and a breathalyzer test. To determine the predictors of positive blood alcohol concentration (BAC) in each region, a MANOVA was performed, and 3 groups were used as follows: 1) North and Northeast, 2) South and Midwest, and 3) Southeast. A robust Poisson regression model was used to assess the variables associated with positive BAC in each group. Of all surveyed drivers, 2,410 had consumed alcohol in the previous 12 months. Most were male, with a median age of 36. Leisure as the reason for travel was associated with positive BAC in all 3 groups. Low schooling, being older than 30, driving cars or motorcycles, and having been given a breathalyzer test at least once in their lives predicted DUI in at least two different groups. Factors associated with drinking and driving, especially low schooling and leisure as the reason for travel, were similar among regions, although certain region-specific features were observed. This information is important for efforts aiming to reduce DUI in the country.
NASA Astrophysics Data System (ADS)
Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.
2018-04-01
This paper examines a bulk arrival and batch service queueing system with functioning server failure and multiple vacations. Customers arrive at the system in bulk according to a Poisson process with rate λ. Arriving customers are served in batches of a minimum of 'a' and a maximum of 'b' customers, according to the general bulk service rule. If, at a service completion epoch, the queue length is less than 'a', then the server leaves for a vacation (secondary job) of random length. After a vacation completion, if the queue length is still less than 'a', the server leaves for another vacation; the server keeps taking vacations until the queue length reaches 'a'. The server is not reliable at all times and may fail while serving customers. When the server fails, the service process is not interrupted: it continues for the current batch of customers at a service rate lower than the regular rate, and the server is repaired after the completion of that lower-rate service. The probability generating function of the queue size at an arbitrary time epoch is obtained for the modelled queueing system by using the supplementary variable technique. Moreover, various performance characteristics are derived, with suitable numerical illustrations.
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard
1994-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson-distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
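The Poisson-centers assumption links fractional cover to plant density through the standard Boolean-model relation (notation mine, not necessarily the paper's):

\[ f = 1 - e^{-\lambda \bar{a}} \quad\Longleftrightarrow\quad \lambda \bar{a} = -\ln(1 - f), \]

where λ is the Poisson intensity of plant centers, \( \bar{a} \) the mean canopy area per plant, and f the fractional cover; overlap between randomly placed canopies is why f saturates below \( \lambda\bar{a} \).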
Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G
2013-12-01
Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in crossbred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with a residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosis, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for the MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by the deviance information criterion (DIC) and the residual mean square, indicated the superiority of the MPOI model. The predictive ability of the models was compared by a validation test in an independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between the MLOG and MPOI models. The Poisson model can therefore be used for the selection of tick-resistant animals.
Calculation of the Poisson cumulative distribution function
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.
1990-01-01
A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
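The abstract does not spell out the algorithm, but the numerical issue it addresses is standard: the individual terms e^(-λ) λ^j / j! underflow or overflow long before the cdf itself does. A log-space sketch of both directions (cdf evaluation, and solving for the Poisson parameter λ that yields a given cdf value) follows; the bisection bracket is an illustrative choice.

```python
import math
from scipy.optimize import brentq
from scipy.stats import poisson   # used only to cross-check below

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulating terms in log space
    to avoid the underflow/overflow of exp(-lam) * lam**j / j!."""
    if k < 0:
        return 0.0
    log_terms = [-lam + j * math.log(lam) - math.lgamma(j + 1)
                 for j in range(int(k) + 1)]
    m = max(log_terms)                      # log-sum-exp trick
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))

def lambda_for_cdf(k, target):
    """Find lam such that P(X <= k) = target (the cdf decreases in lam)."""
    return brentq(lambda lam: poisson_cdf(k, lam) - target, 1e-9, 1e6)

print(poisson_cdf(900, 1000.0), poisson.cdf(900, 1000.0))  # cross-check
print(lambda_for_cdf(k=10, target=0.95))   # lam giving P(X<=10) = 0.95
```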
Poisson's Ratio of a Hyperelastic Foam Under Quasi-static and Dynamic Loading
Sanborn, Brett; Song, Bo
2018-06-03
Poisson's ratio is a material constant representing compressibility of material volume. However, when soft, hyperelastic materials such as silicone foam are subjected to large deformation into densification, the Poisson's ratio may rather significantly change, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios where foams are used as isolators. The evolution of Poisson's ratio of silicone foam materials has not yet been characterized, particularly under dynamic loading. In this study, radial and axial measurements of specimen strain are conducted simultaneously during quasi-static and dynamic compression tests to determine the Poisson's ratio of silicone foam. The Poisson's ratio of silicone foam exhibited a transition from compressible to nearly incompressible at a threshold strain that coincided with the onset of densification in the material. Poisson's ratio as a function of engineering strain was different at quasi-static and dynamic rates. Here, the Poisson's ratio behavior is presented and can be used to improve constitutive modeling of silicone foams subjected to a broad range of mechanical loading.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real-valued) continuous random variables and for integer-valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer-valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer-valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.
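For the unconstrained continuous case, the straight-line relationship mentioned above follows from the standard maximum-entropy result; a hedged LaTeX reconstruction (ours, derived from the classical Lagrange-multiplier argument, not copied from the report):

    % Generalized-Gaussian maximizer for fixed \|X\|_p^p = E|X|^p:
    f(x) = \frac{\lambda^{1/p}}{2\,\Gamma(1+1/p)}\,\exp\!\bigl(-\lambda |x|^{p}\bigr),
    \qquad \lambda = \frac{1}{p\,\|X\|_p^{p}},
    % whose differential entropy is linear in \log\|X\|_p:
    h_{\max} = \ln \|X\|_p + \frac{1}{p}\bigl(1+\ln p\bigr) + \ln\!\bigl(2\,\Gamma(1+1/p)\bigr).

Setting p = 2 recovers the Gaussian and the familiar h = (1/2) ln(2 pi e sigma^2), a useful sanity check on the constants.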
Palacios, Julia A; Minin, Vladimir N
2013-03-01
Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as the glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating the conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human influenza A viruses. In both cases, we recover more plausible aspects of the viral demographic histories than the GMRF approach does. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.
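The core computation the abstract describes, estimating a point-process intensity under a Gaussian-process prior, can be caricatured in a few lines. The sketch below is our simplified illustration, not the paper's inference scheme: a MAP estimate of the log-intensity on a discretized grid, with a squared-exponential GP prior whose length-scale and variance are illustrative choices.

    import numpy as np
    from scipy.optimize import minimize

    def map_log_intensity(event_times, T, n_bins=50, ell=0.1, sigma2=1.0):
        """MAP estimate of lambda(t) on [0, T] from observed event times."""
        edges = np.linspace(0.0, T, n_bins + 1)
        dt = edges[1] - edges[0]
        counts, _ = np.histogram(event_times, bins=edges)
        t = 0.5 * (edges[:-1] + edges[1:])
        # Squared-exponential GP prior on the log-intensity f = log(lambda)
        K = sigma2 * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ell**2)
        K += 1e-8 * np.eye(n_bins)                 # jitter for numerical stability
        Kinv = np.linalg.inv(K)

        def neg_log_post(f):
            # Discretized Poisson likelihood plus the GP quadratic penalty
            return np.sum(dt * np.exp(f) - counts * f) + 0.5 * f @ Kinv @ f

        f0 = np.full(n_bins, np.log(max(len(event_times), 1) / T))
        res = minimize(neg_log_post, f0, method="L-BFGS-B")
        return t, np.exp(res.x)                    # grid midpoints, estimated intensity

The same structure, with coalescent times as the events and the coalescent likelihood in place of the plain Poisson one, is the shape of the problem the paper addresses.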
NASA Astrophysics Data System (ADS)
Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.
2018-02-01
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach, which modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the nonnegativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true-coincidence count rates with high random fractions, corresponding to typical values from patient imaging in Y-90 microsphere radioembolization. We compared the new method with standard reconstruction algorithms and with NEG-ML and a regularized version thereof. Both the new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, whereas the proposed algorithm is robust to any count level without requiring parameter tuning.
Austin, Peter C.; Stryhn, Henrik; Leckie, George; Merlo, Juan
2017-01-01
Multilevel data occur frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models. These models incorporate cluster-specific random effects that allow one to partition the total variation in the outcome into between-cluster variation and between-individual variation. The magnitude of the effect of clustering provides a measure of the general contextual effect. When outcomes are binary or time-to-event in nature, the general contextual effect can be quantified by measures of heterogeneity like the median odds ratio or the median hazard ratio, respectively, which can be calculated from a multilevel regression model. Outcomes that are integer counts denoting the number of times that an event occurred are common in epidemiological and medical research. The median (incidence) rate ratio in multilevel Poisson regression for counts, the analogue of the median odds ratio for binary outcomes and the median hazard ratio for time-to-event outcomes, is relatively unknown and rarely used. The median rate ratio is the median relative change in the rate of the occurrence of the event when comparing identical subjects from two randomly selected different clusters that are ordered by rate. We also describe how the variance partition coefficient, which denotes the proportion of the variation in the outcome that is attributable to between-cluster differences, can be computed with count outcomes. We illustrate the application and interpretation of these measures in a case study analyzing the rate of hospital readmission in patients discharged from hospital with a diagnosis of heart failure. PMID:29114926
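The abstract does not print the formula, but the median rate ratio is conventionally computed the same way as the median odds ratio, from the variance of the cluster random intercepts on the log-rate scale. A one-line sketch under that assumption:

    import math
    from scipy.stats import norm

    def median_rate_ratio(var_cluster: float) -> float:
        """MRR = exp( sqrt(2 * var_cluster) * Phi^{-1}(0.75) )."""
        return math.exp(math.sqrt(2.0 * var_cluster) * norm.ppf(0.75))

With a between-cluster variance of 0.25, this gives MRR = exp(0.707 x 0.674), roughly 1.61: comparing two identical patients from two randomly chosen clusters, the higher-rate cluster typically has about a 61% higher event rate.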
A Martingale Characterization of Mixed Poisson Processes.
Pfeifer, Dietmar (Technical University Aachen)
1985-10-01
Mixed Poisson processes play an important role in many branches of applied probability, for instance in insurance mathematics and physics (see Albrecht …).
1978-12-01
… Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a_0 + a_1 t + a_2 t^2). The thinning programs are competitive in both execution …
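A minimal version of basic thinning for the quadratic-exponential intensity named in the abstract (a sketch under standard assumptions; the execution-time refinements the report studies are omitted):

    import math, random

    def thinning(a0, a1, a2, T, seed=None):
        """Sample an inhomogeneous Poisson process with
        intensity lambda(t) = exp(a0 + a1*t + a2*t^2) on [0, T]."""
        rng = random.Random(seed)
        lam = lambda t: math.exp(a0 + a1 * t + a2 * t * t)
        # Dominating constant rate: the quadratic exponent attains its maximum
        # on [0, T] at an endpoint or, when a2 < 0, at the vertex -a1/(2*a2).
        candidates = [0.0, T]
        if a2 != 0:
            t_star = -a1 / (2 * a2)
            if 0 <= t_star <= T:
                candidates.append(t_star)
        lam_max = max(lam(t) for t in candidates)
        events, t = [], 0.0
        while True:
            t += rng.expovariate(lam_max)        # homogeneous candidate point
            if t > T:
                return events
            if rng.random() < lam(t) / lam_max:  # accept w.p. lambda(t)/lam_max
                events.append(t)

Each candidate from the dominating homogeneous process is kept with probability lambda(t)/lam_max, so the accepted points form the desired inhomogeneous process.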
Deformation mechanisms in negative Poisson's ratio materials - Structural aspects
NASA Technical Reports Server (NTRS)
Lakes, R.
1991-01-01
Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.
Exact solution for the Poisson field in a semi-infinite strip.
Cohen, Yossi; Rothman, Daniel H
2017-04-01
The Poisson equation is associated with many physical processes. Yet exact analytic solutions for the two-dimensional Poisson field are scarce. Here we derive an analytic solution for the Poisson equation with constant forcing in a semi-infinite strip. We provide a method that can be used to solve the field in other intricate geometries. We show that the Poisson flux exhibits an inverse square-root singularity at the tip of a slit, and we identify a characteristic length scale within which a small perturbation, in the form of a new slit, is screened by the field. We suggest that this length scale expresses itself as a characteristic spacing between tips in real Poisson networks that grow in response to fluxes at tips.
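The analytic solution itself is not reproduced in the abstract, but the setup is easy to cross-check numerically. A hedged sketch (5-point finite differences on a strip truncated at length L, homogeneous Dirichlet data; all geometry parameters are illustrative, not the paper's):

    import numpy as np

    def poisson_strip(width=1.0, L=8.0, h=0.1):
        """Solve -laplacian(u) = 1 on a width x L strip with u = 0 on the boundary."""
        nx, ny = int(L / h) - 1, int(width / h) - 1   # interior nodes only
        N = nx * ny
        idx = lambda i, j: i * ny + j
        A = np.zeros((N, N))
        b = np.full(N, h * h)                          # constant forcing f = 1
        for i in range(nx):
            for j in range(ny):
                k = idx(i, j)
                A[k, k] = 4.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < nx and 0 <= jj < ny:
                        A[k, idx(ii, jj)] = -1.0       # boundary neighbours contribute 0
        return np.linalg.solve(A, b).reshape(nx, ny)

Far from the short end, the solution should converge to the one-dimensional parabolic profile u(y) = y(width - y)/2, which is a convenient check that the truncation length L is large enough.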
Segundo, J P; Sugihara, G; Dixon, P; Stiber, M; Bersier, L F
1998-12-01
This communication describes the new information that may be obtained by applying nonlinear analytical techniques to neurobiological time-series. Specifically, we consider the sequence of interspike intervals Ti (the "timing") of trains recorded from synaptically inhibited crayfish pacemaker neurons. As reported earlier, different postsynaptic spike train forms (sets of timings with shared properties) are generated by varying the average rate and/or pattern (implying interval dispersions and sequences) of presynaptic spike trains. When the presynaptic train is Poisson (independent exponentially distributed intervals), the form is "Poisson-driven" (unperturbed and lengthened intervals succeed each other irregularly). When presynaptic trains are pacemaker (intervals practically equal), forms are either "p:q locked" (intervals repeat periodically), "intermittent" (mostly almost locked but disrupted irregularly), "phase walk throughs" (intermittencies with briefer regular portions), or "messy" (difficult to predict or describe succinctly). Messy trains are either "erratic" (some intervals natural and others lengthened irregularly) or "stammerings" (intervals are integral multiples of presynaptic intervals). The individual spike train forms were analysed using attractor reconstruction methods based on the lagged coordinates provided by successive intervals from the time-series Ti. Numerous models were evaluated in terms of their predictive performance by a trial-and-error procedure: the most successful model was taken as best reflecting the true nature of the system's attractor. Each form was characterized in terms of its dimensionality, nonlinearity and predictability. (1) The dimensionality of the underlying dynamical attractor was estimated by the minimum number of variables (coordinates Ti) required to model acceptably the system's dynamics, i.e. by the system's degrees of freedom. Each model tested was based on a different number of Ti; the smallest number whose predictions were judged successful provided the best integer approximation of the attractor's true dimension (not necessarily an integer). Dimensionalities from three to five provided acceptable fits. (2) The degree of nonlinearity was estimated by: (i) comparing the correlations between experimental results and data from linear and nonlinear models, and (ii) tuning model nonlinearity via a distance-weighting function and identifying the either local or global neighborhood size. Lockings were compatible with linear models and stammerings were marginal; nonlinear models were best for Poisson-driven, intermittent and erratic forms. (3) Finally, prediction accuracy was plotted against increasingly long sequences of intervals forecast: the accuracies for Poisson-driven, locked and stammering forms were invariant, revealing irregularities due to uncorrelated noise, but those of intermittent and messy erratic forms decayed rapidly, indicating an underlying deterministic process. The excellent reconstructions possible for messy erratic and for some intermittent forms are especially significant because of their relatively low dimensionality (around 4), high degree of nonlinearity and prediction decay with time. This is characteristic of chaotic systems, and provides evidence that nonlinear couplings between relatively few variables are the major source of the apparent complexity seen in these cases. 
This demonstration of different dimensions, degrees of nonlinearity and predictabilities provides rigorous support for the categorization of different synaptically driven discharge forms proposed earlier on the basis of more heuristic criteria. This has significant implications. (1) It demonstrates that heterogeneous postsynaptic forms can indeed be induced by manipulating a few presynaptic variables. (2) Each presynaptic timing induces a form with characteristic dimensionality, thus breaking up the preparation into subsystems such that the physical variables in each operate as one
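The prediction procedure described above (lagged-coordinate embedding, distance-weighted neighbours, trial-and-error over embedding dimension) can be caricatured as follows; this is our simplified reconstruction, not the authors' code:

    import numpy as np

    def embed(intervals, dim):
        """Rows are lagged coordinates (T_i, T_{i+1}, ..., T_{i+dim-1})."""
        T = np.asarray(intervals, float)
        return np.stack([T[i:len(T) - dim + i] for i in range(dim)], axis=1)

    def knn_forecast_score(intervals, dim=4, k=5, eps=1e-9):
        """Predict each interval from the dim preceding ones; return the
        correlation of predictions with observations as a crude score."""
        X = embed(intervals, dim + 1)
        past, future = X[:, :dim], X[:, dim]
        preds = []
        for i in range(len(past)):
            d = np.linalg.norm(past - past[i], axis=1)
            d[i] = np.inf                       # exclude the point itself
            nn = np.argsort(d)[:k]
            w = 1.0 / (d[nn] + eps)             # distance weighting
            preds.append(np.sum(w * future[nn]) / np.sum(w))
        return np.corrcoef(preds, future)[0, 1]

Scanning dim over, say, 2 to 8 and keeping the smallest dimension whose score is acceptable mirrors the dimensionality estimate described in the abstract.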
Minois, Nathan; Savy, Stéphanie; Lauwers-Cances, Valérie; Andrieu, Sandrine; Savy, Nicolas
2017-03-01
Recruiting patients is a crucial step of a clinical trial. Estimation of the trial duration is a question of paramount interest. Most techniques are based on deterministic models or various ad hoc methods that neglect the variability in the recruitment process. To overcome this difficulty, the so-called Poisson-gamma model has been introduced, in which each centre's recruitment is modelled by a Poisson process whose rate is assumed constant in time and gamma-distributed. The relevance of this model has been widely investigated. In practice, rates are rarely constant in time: there are breaks in recruitment (for instance weekends or holidays). Such information can be collected and included in a model with piecewise-constant rate functions, yielding an inhomogeneous Cox model, for which estimation of the trial duration is much more difficult. Three strategies for computing the expected trial duration are proposed: considering all the breaks, considering only large breaks, and ignoring breaks. The biases of these estimation procedures are assessed by means of simulation studies under three break-simulation scenarios. All three strategies yield estimates with a very small bias. Moreover, the strategy with the best predictive performance and the smallest bias is the one that ignores breaks. This result is useful because, in practice, collecting break data is difficult.
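Under the basic Poisson-gamma model (constant rates, no breaks), the expected trial duration is straightforward to estimate by simulation, because the superposition of the centre processes is itself a Poisson process. A hedged sketch with illustrative parameter names:

    import numpy as np

    def expected_duration(n_patients, n_centres, shape, scale,
                          n_sims=2000, rng=None):
        """Monte-Carlo estimate of E[time to recruit n_patients] when each of
        n_centres recruits as a Poisson process with a Gamma(shape, scale) rate."""
        rng = rng or np.random.default_rng(0)
        durations = np.empty(n_sims)
        for s in range(n_sims):
            rates = rng.gamma(shape, scale, size=n_centres)  # centre-specific rates
            total = rates.sum()          # superposition is Poisson with rate `total`
            # time of the n-th event of a rate-`total` Poisson process
            durations[s] = rng.gamma(n_patients, 1.0 / total)
        return durations.mean()

Each replicate draws the random centre rates once and then uses the fact that the n-th arrival time of a homogeneous Poisson process is gamma-distributed; adding piecewise-constant breaks is what makes the Cox-model version of this computation hard.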
Zero adjusted models with applications to analysing helminths count data.
Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N
2014-11-27
It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of the outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when overdispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a setting with a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets, and they also captured zero counts better than the other models. This paper shows that zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between the hurdle and zero-inflated models should be based on the aim and endpoints of the study.
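As an illustration of the model class (not the paper's covariate analysis), a zero-inflated Poisson without regressors can be fitted by direct likelihood maximization; the mixture places probability pi on a structural zero and 1 - pi on an ordinary Poisson count:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln, expit

    def zip_negloglik(params, y):
        """params = (logit_pi, log_lambda); pi is the extra-zero probability."""
        pi, lam = expit(params[0]), np.exp(params[1])
        log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
        ll = np.where(y == 0,
                      np.log(pi + (1 - pi) * np.exp(-lam)),  # structural or Poisson zero
                      np.log1p(-pi) + log_pois)              # strictly positive counts
        return -ll.sum()

    def fit_zip(y):
        y = np.asarray(y, float)
        res = minimize(zip_negloglik,
                       x0=np.array([0.0, np.log(y.mean() + 0.5)]),
                       args=(y,), method="Nelder-Mead")
        return expit(res.x[0]), np.exp(res.x[1])    # (pi_hat, lambda_hat)

The hurdle variant differs only in the zero part: all zeros come from the binary component, and the positive counts follow a truncated Poisson or negative binomial.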
On the minimum of independent geometrically distributed random variables
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David
1994-01-01
The expectations E(X_{(1)}), E(Z_{(1)}), and E(Y_{(1)}) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_{(1)})/E(Y_{(1)}) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minima.
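For the plain geometric case the minimum has a simple closed form: P(min > k) = ((1-p)^k)^n, so the minimum of n i.i.d. geometrics is again geometric with success probability 1 - (1-p)^n. A quick simulation check (support {1, 2, ...} assumed, matching NumPy's convention):

    import numpy as np

    def e_min_geometric(n, p):
        """E[min of n iid Geometric(p)] = 1 / (1 - (1-p)^n)."""
        return 1.0 / (1.0 - (1.0 - p) ** n)

    rng = np.random.default_rng(1)
    n, p = 5, 0.3
    sims = rng.geometric(p, size=(200_000, n)).min(axis=1)
    print(e_min_geometric(n, p), sims.mean())   # the two values should agree closely

Comparing this with 1/(n * lam) for the minimum of n exponentials with matching means makes the discrepancy, and its link to ties at the minimum, easy to see numerically.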