Sample records for size probability distribution

  1. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper discusses optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures; it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is taken as a conservative estimate of the flaw size with a minimum 90% probability of detection at 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
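
    For readers unfamiliar with the 29-of-29 convention, the following minimal Python sketch (not NASA's qualified procedure, just the standard binomial argument behind it) shows why detecting all 29 flaws demonstrates 90% POD at roughly 95% confidence.

    ```python
    # Minimal sketch of the binomial arithmetic behind the 29-of-29 point-estimate
    # demonstration; the flaw count and POD value follow the abstract above.
    from scipy.stats import binom

    n_flaws = 29      # flaws of (nominally) the same size in the demonstration set
    pod_true = 0.90   # hypothesised probability of detection at that flaw size

    # Probability of passing the demonstration (all 29 flaws detected) if POD = 0.90.
    ppd = binom.pmf(n_flaws, n_flaws, pod_true)   # = 0.90**29, about 0.047

    # Because 0.047 < 0.05, a 29-of-29 result rejects "POD < 0.90" at ~95% confidence,
    # which is why the largest flaw in the set is taken as a conservative a90/95 estimate.
    print(f"P(pass | POD = 0.90) = {ppd:.3f}")
    ```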

  2. Does Litter Size Variation Affect Models of Terrestrial Carnivore Extinction Risk and Management?

    PubMed Central

    Devenish-Nelson, Eleanor S.; Stephens, Philip A.; Harris, Stephen; Soulsbury, Carl; Richards, Shane A.

    2013-01-01

    Background Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. Methodology/Principal Findings We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species – the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. Conclusion/Significance These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes. PMID:23469140

  3. Does litter size variation affect models of terrestrial carnivore extinction risk and management?

    PubMed

    Devenish-Nelson, Eleanor S; Stephens, Philip A; Harris, Stephen; Soulsbury, Carl; Richards, Shane A

    2013-01-01

    Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species - the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes.
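
    As a rough illustration of the model comparison this study describes, the sketch below fits a discretised normal and a Poisson to hypothetical litter-size counts and compares them by AIC; the data, truncation range, and fitting choices are assumptions for illustration only, not the study's data or code.

    ```python
    # Hedged sketch: compare a discretised (interval-censored, truncated) normal with a
    # Poisson fit to litter sizes via AIC. Litter-size data below are made up.
    import numpy as np
    from scipy import stats, optimize

    litters = np.array([3]*4 + [4]*12 + [5]*20 + [6]*11 + [7]*3)  # hypothetical litter sizes

    def discretised_normal_nll(params, x):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        # P(k) = Phi(k + 0.5) - Phi(k - 0.5), renormalised over k >= 1
        k = np.arange(1, 25)
        pk = stats.norm.cdf(k + 0.5, mu, sigma) - stats.norm.cdf(k - 0.5, mu, sigma)
        pk /= pk.sum()
        return -np.sum(np.log(pk[x - 1]))

    res = optimize.minimize(discretised_normal_nll, x0=[litters.mean(), litters.std()],
                            args=(litters,), method="Nelder-Mead")
    aic_dnorm = 2 * 2 + 2 * res.fun                    # two parameters (mu, sigma)

    lam = litters.mean()                               # Poisson MLE
    aic_pois = 2 * 1 - 2 * np.sum(stats.poisson.logpmf(litters, lam))

    print(f"AIC discretised normal = {aic_dnorm:.1f}, AIC Poisson = {aic_pois:.1f}")
    ```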

  4. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considers a stochastic model for cluster growth in a Markov process with cluster-size-dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution, depending on the initial condition of the growth. In this letter, a master equation is obtained for the model, and the derivation of the distributions is discussed.

  5. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    2016-04-07

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.

  6. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE PAGES

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    2016-04-07

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
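
    A quick numerical check of the flavor of this result (my own illustration, not the authors' derivation): repeatedly rescaling a positive variable by independent random factors drives its mantissa toward the reciprocal distribution p(m) = 1/(m ln 10), whose first-digit marginal is Benford's law.

    ```python
    # Illustrative check: products of independent random scale factors produce
    # Benford-distributed leading digits, regardless of the starting distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(1.0, 2.0, size=100_000)        # arbitrary starting distribution
    for _ in range(20):                            # repeated change of scale
        x *= rng.lognormal(mean=0.0, sigma=1.0, size=x.size)

    mantissa = 10 ** np.mod(np.log10(x), 1.0)      # mantissa in [1, 10)
    first_digit = mantissa.astype(int)

    empirical = np.bincount(first_digit, minlength=10)[1:] / x.size
    benford = np.log10(1 + 1 / np.arange(1, 10))   # P(d) = log10(1 + 1/d)
    print(np.round(empirical, 3))
    print(np.round(benford, 3))
    ```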

  7. Fragment size distribution in viscous bag breakup of a drop

    NASA Astrophysics Data System (ADS)

    Kulkarni, Varun; Bulusu, Kartik V.; Plesniak, Michael W.; Sojka, Paul E.

    2015-11-01

    In this study we examine the drop size distribution resulting from the fragmentation of a single drop in the presence of a continuous air jet. Specifically, we study the effect of the Weber number, We, and the Ohnesorge number, Oh, on the disintegration process. The regime of breakup considered is observed between 12 <= We <= 16 for Oh <= 0.1. Experiments are conducted using phase Doppler anemometry. Both the number and volume fragment size probability distributions are plotted. The volume probability distribution revealed a bi-modal behavior with two distinct peaks: one corresponding to the rim fragments and the other to the bag fragments. This behavior was suppressed in the number probability distribution. Additionally, we employ an in-house particle detection code to isolate the rim fragment size distribution from the total probability distributions. Our experiments showed that the bag fragments are smaller in diameter and larger in number, while the rim fragments are larger in diameter and smaller in number. Furthermore, with increasing We for a given Oh, we observe a large number of small-diameter drops and a small number of large-diameter drops. On the other hand, with increasing Oh for a fixed We, the opposite is seen.

  8. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
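
    The sketch below illustrates what length-biased sampling does to a Weibull population and one simple correction; the parameter values and the harmonic-mean-type estimator shown are illustrative assumptions, not the moment or maximum likelihood estimators derived in the paper.

    ```python
    # Hedged sketch of length-biased sampling: the observed density is
    # f1(x) = x f(x) / E[X], so a naive mean of a length-biased sample is inflated.
    import numpy as np
    from scipy import special

    rng = np.random.default_rng(1)
    shape, scale = 2.0, 10.0                       # illustrative Weibull parameters
    pop = rng.weibull(shape, 200_000) * scale      # underlying population

    # Length-biased sample: inclusion probability proportional to x.
    probs = pop / pop.sum()
    biased = rng.choice(pop, size=5_000, p=probs)

    true_mean = scale * special.gamma(1 + 1 / shape)
    print(f"population mean      = {true_mean:.2f}")
    print(f"naive sample mean    = {biased.mean():.2f}")          # biased upward

    # Length-biased correction: E[1/X_biased] = 1/E[X], hence E[X] = 1 / mean(1/x).
    print(f"corrected mean       = {1 / np.mean(1 / biased):.2f}")
    ```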

  9. Theoretical size distribution of fossil taxa: analysis of a null model.

    PubMed

    Reed, William J; Hughes, Barry D

    2007-03-22

    This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.

  10. Theoretical size distribution of fossil taxa: analysis of a null model

    PubMed Central

    Reed, William J; Hughes, Barry D

    2007-01-01

    Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249

  11. Theoretical cratering rates on Ida, Mathilde, Eros and Gaspra

    NASA Astrophysics Data System (ADS)

    Jeffers, S. V.; Asher, D. J.; Bailey, M. E.

    2002-11-01

    We investigate the main influences on crater size distributions by deriving results for four example target objects: (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY (Chambers 1999) numerical integrator. The use of an efficient, Öpik-type collision code enables the calculation of a velocity histogram and the probability of impact. This, when combined with a crater scaling law and an impactor size distribution through a Monte Carlo method, results in a crater size distribution. The resulting crater probability distributions are in good agreement with observed crater distributions on these asteroids.

  12. Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2009-01-01

    Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
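
    A minimal sketch of the aggregation step described above, with entirely made-up rates: if each source produces threshold-exceeding tsunamis as an independent Poisson process, the source rates simply add before being converted to a probability of at least one event in an exposure window.

    ```python
    # Hedged sketch: aggregate independent Poissonian tsunami sources.
    # The annual rates below are hypothetical placeholders, not the study's values.
    import numpy as np

    rates = {
        "plate-boundary earthquakes (transoceanic)": 1.0e-3,   # events/yr above threshold
        "continental-slope landslides (local)":      4.0e-4,
    }
    T = 50.0  # exposure window in years

    total_rate = sum(rates.values())
    p_exceed = 1.0 - np.exp(-total_rate * T)   # Poisson: P(N >= 1) = 1 - exp(-lambda*T)
    print(f"P(at least one event in {T:.0f} yr) = {p_exceed:.3f}")
    ```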

  13. Effects of Vertex Activity and Self-organized Criticality Behavior on a Weighted Evolving Network

    NASA Astrophysics Data System (ADS)

    Zhang, Gui-Qing; Yang, Qiu-Ying; Chen, Tian-Lun

    2008-08-01

    Effects of vertex activity have been analyzed on a weighted evolving network. The network is characterized by the probability distribution of vertex strength, each edge weight and evolution of the strength of vertices with different vertex activities. The model exhibits self-organized criticality behavior. The probability distribution of avalanche size for different network sizes is also shown. In addition, there is a power law relation between the size and the duration of an avalanche and the average of avalanche size has been studied for different vertex activities.

  14. Size distribution of submarine landslides along the U.S. Atlantic margin

    USGS Publications Warehouse

    Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.

    2009-01-01

    Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km2 and 2410 km2 and volumes between 0.002 km3 and 179 km3. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km3 may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km3), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
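
    The model comparison described above can be mimicked on synthetic data as follows; the simulated scar volumes, the use of the smallest observed volume as the power-law cutoff, and the Hill estimator are assumptions of this sketch, not the paper's analysis.

    ```python
    # Hedged sketch: fit a log-normal and a Pareto (inverse power law) to synthetic
    # landslide-scar volumes and compare maximised log-likelihoods.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    volumes = rng.lognormal(mean=np.log(0.86), sigma=1.5, size=300)   # km^3, synthetic

    # Log-normal MLE
    mu, sigma = np.log(volumes).mean(), np.log(volumes).std()
    ll_lognorm = stats.lognorm.logpdf(volumes, s=sigma, scale=np.exp(mu)).sum()

    # Pareto MLE above the smallest observed volume x_min (Hill estimator of the exponent)
    x_min = volumes.min()
    alpha = volumes.size / np.sum(np.log(volumes / x_min))
    ll_pareto = stats.pareto.logpdf(volumes, b=alpha, scale=x_min).sum()

    print(f"log-likelihood: log-normal = {ll_lognorm:.1f}, power law = {ll_pareto:.1f}")
    print(f"fitted power-law exponent  = {alpha:.2f}")
    ```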

  15. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz cellular automaton using five-bit demons near the infinite-lattice critical temperature, with linear dimensions L=4, 6, 8, 10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  16. General formulation of long-range degree correlations in complex networks

    NASA Astrophysics Data System (ADS)

    Fujiki, Yuka; Takaguchi, Taro; Yakubo, Kousuke

    2018-06-01

    We provide a general framework for analyzing degree correlations between nodes separated by more than one step (i.e., beyond nearest neighbors) in complex networks. One joint and four conditional probability distributions are introduced to fully describe long-range degree correlations with respect to degrees k and k' of two nodes and shortest path length l between them. We present general relations among these probability distributions and clarify the relevance to nearest-neighbor degree correlations. Unlike nearest-neighbor correlations, some of these probability distributions are meaningful only in finite-size networks. Furthermore, as a baseline to determine the existence of intrinsic long-range degree correlations in a network other than inevitable correlations caused by the finite-size effect, the functional forms of these probability distributions for random networks are analytically evaluated within a mean-field approximation. The utility of our argument is demonstrated by applying it to real-world networks.

  17. Second look at the spread of epidemics on networks

    NASA Astrophysics Data System (ADS)

    Kenah, Eben; Robins, James M.

    2007-09-01

    In an important paper, Newman [Phys. Rev. E66, 016128 (2002)] claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semidirected random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In the Appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.

  18. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
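
    A rough Monte Carlo analogue of the simplest case in the simulation described above (randomly distributed items, one circular core sampler, illustrative density and core area) is sketched below; it simply checks the per-core detection probability against the Poisson expectation 1 - exp(-density × core area).

    ```python
    # Hedged sketch: per-core detection probability for randomly (Poisson) distributed
    # benthic items in a 1 m^2 plot. Density and core area are illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    density = 1_000                        # items per m^2
    core_area = 50.0 / 1e4                 # 50 cm^2 core sampler, in m^2
    core_radius = np.sqrt(core_area / np.pi)

    hits, n_trials = 0, 5_000
    for _ in range(n_trials):
        n_items = rng.poisson(density)                     # items in the 1 m^2 plot
        items = rng.uniform(0.0, 1.0, size=(n_items, 2))
        centre = rng.uniform(core_radius, 1.0 - core_radius, size=2)
        d = np.hypot(items[:, 0] - centre[0], items[:, 1] - centre[1])
        hits += bool(np.any(d <= core_radius))

    print(f"simulated detection probability ≈ {hits / n_trials:.3f}")
    print(f"Poisson expectation 1 - exp(-density*area) = {1 - np.exp(-density * core_area):.3f}")
    ```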

  19. Deviation from Power Law Behavior in Landslide Phenomenon

    NASA Astrophysics Data System (ADS)

    Li, L.; Lan, H.; Wu, Y.

    2013-12-01

    A power-law distribution of magnitude is widely observed in many natural hazards (e.g., earthquakes, floods, tornadoes, and forest fires). Landslides are unique in that their size distribution is characterized by a power-law decrease with a rollover at the small-size end. Yet the emergence of the rollover, i.e., the deviation from power-law behavior for small landslides, remains a mystery. In this contribution, we group the forces acting on landslide bodies into two categories: 1) forces proportional to the volume of the failure mass (gravity and friction), and 2) forces proportional to the area of the failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, given a certain mechanical configuration, the ratio of failure volume to failure surface area must exceed a corresponding threshold to produce a failure. Assuming all landslides share a uniform shape, so that the volume-to-surface-area ratio increases regularly with landslide volume, a cutoff of the landslide volume distribution at the small-size end can be defined. However, in realistic landslide phenomena, where landslide shape and mechanical configuration are heterogeneous, a simple cutoff of the volume distribution does not exist. The stochasticity of landslide shape introduces a probability distribution of the volume-to-surface-area ratio for a given landslide volume, from which the probability that this ratio exceeds the threshold can be estimated as a function of landslide volume. An experiment based on empirical data showed that this probability can cause the power-law distribution of landslide volume to roll over at the small-size end. We therefore propose that constraints on the failure-volume-to-failure-surface-area ratio, together with the heterogeneity of landslide geometry and mechanical configuration, account for the deviation from power-law behavior in the landslide phenomenon. The accompanying figure shows that a rollover of the landslide size distribution at the small-size end is produced when the probability that V/S (the failure-volume-to-failure-surface ratio) exceeds the mechanical threshold is applied to the power-law distribution of landslide volume.

  20. Size Effect on Specific Energy Distribution in Particle Comminution

    NASA Astrophysics Data System (ADS)

    Xu, Yongfu; Wang, Yidong

    A theoretical study is made to derive an energy distribution equation for the size-reduction process from the fractal model of particle comminution. The fractal model is employed as a valid measure of the self-similar size distribution of comminution daughter products. The tensile strength of particles varies with particle size according to a power law. The energy consumed in comminuting a single particle is found to be proportional to the 5(D-3)/3 power of the particle size, D being the fractal dimension of the comminution daughters. Weibull statistics are applied to describe the relationship between the breakage probability and the specific energy of particle comminution. A simple equation is derived for the breakage probability of particles in view of the dependence of fracture energy on particle size. The calculated exponents and Weibull coefficients are generally in conformity with published data for particle fracture.

  1. Site occupancy models with heterogeneous detection probabilities

    USGS Publications Warehouse

    Royle, J. Andrew

    2006-01-01

    Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
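
    A hedged sketch of the zero-inflated binomial mixture likelihood described above, using a beta mixing distribution for detection probability (one possible mixture class among those the article considers); the detection histories below are hypothetical.

    ```python
    # Sketch: zero-inflated beta-binomial likelihood for site occupancy with
    # heterogeneous detection probability. Data are made-up detection histories.
    import numpy as np
    from scipy import stats, optimize

    J = 5                                                          # visits per site
    y = np.array([0, 0, 0, 1, 0, 2, 0, 3, 0, 0, 1, 0, 4, 0, 0])   # detections per site

    def nll(params):
        logit_psi, log_a, log_b = params
        psi = 1 / (1 + np.exp(-logit_psi))           # occupancy probability
        a, b = np.exp(log_a), np.exp(log_b)          # beta mixing parameters for p
        site_lik = psi * stats.betabinom.pmf(y, J, a, b)   # integrated detection likelihood
        site_lik = site_lik + (1 - psi) * (y == 0)         # zero inflation (unoccupied sites)
        return -np.sum(np.log(site_lik))

    fit = optimize.minimize(nll, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    psi_hat = 1 / (1 + np.exp(-fit.x[0]))
    print(f"estimated occupancy psi = {psi_hat:.2f}")
    ```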

  2. The effect of microscopic friction and size distributions on conditional probability distributions in soft particle packings

    NASA Astrophysics Data System (ADS)

    Saitoh, Kuniyasu; Magnanimo, Vanessa; Luding, Stefan

    2017-10-01

    Employing two-dimensional molecular dynamics (MD) simulations of soft particles, we study their non-affine responses to quasi-static isotropic compression where the effects of microscopic friction between the particles in contact and particle size distributions are examined. To quantify complicated restructuring of force-chain networks under isotropic compression, we introduce the conditional probability distributions (CPDs) of particle overlaps such that a master equation for distribution of overlaps in the soft particle packings can be constructed. From our MD simulations, we observe that the CPDs are well described by q-Gaussian distributions, where we find that the correlation for the evolution of particle overlaps is suppressed by microscopic friction, while it significantly increases with the increase of poly-dispersity.

  3. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.

  4. Dealing with non-unique and non-monotonic response in particle sizing instruments

    NASA Astrophysics Data System (ADS)

    Rosenberg, Phil

    2017-04-01

    A number of instruments used as de facto standards for measuring particle size distributions are actually incapable of uniquely determining the size of an individual particle. This is due to non-unique or non-monotonic response functions. Optical particle counters have a non-monotonic response due to oscillations in the Mie response curves, especially for large aerosol and small cloud droplets. Scanning mobility particle sizers respond identically to two particles where the ratio of particle size to particle charge is approximately the same. Images of two differently sized cloud or precipitation particles taken by an optical array probe can have similar dimensions or shadowed area depending upon where they are in the imaging plane. A number of methods exist to deal with these issues, including assuming that positive and negative errors cancel, smoothing response curves, integrating regions in measurement space before conversion to size space, and matrix inversion. Matrix inversion (also called kernel inversion) has the advantage that it determines the size distribution which best matches the observations, given specific information about the instrument (a matrix which specifies the probability that a particle of a given size will be measured in a given instrument size bin). In this way it maximises use of the information in the measurements. However, this technique can be confused by poor counting statistics, which can cause erroneous results and negative concentrations. Also, an effective method for propagating uncertainties is yet to be published or routinely implemented. Here we present a new alternative which overcomes these issues. We use Bayesian methods to determine the probability that a given size distribution is correct given a set of instrument data, and then we use Markov Chain Monte Carlo methods to sample this many-dimensional probability distribution function to determine the expectation and (co)variances - hence providing a best guess and an uncertainty for the size distribution that includes contributions from the non-unique response curve and counting statistics, and that can propagate calibration uncertainties.
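
    A toy version of this Bayesian/MCMC idea is sketched below; the response matrix, the two-parameter log-normal size distribution, the flat prior, and the random-walk Metropolis sampler are all assumptions of the sketch, not the instrument models or code used by the author.

    ```python
    # Sketch: infer a log-normal size distribution from Poisson bin counts, given an
    # assumed instrument response matrix K[i, j] = P(true size class j -> instrument bin i).
    import numpy as np

    rng = np.random.default_rng(4)
    sizes = np.geomspace(0.1, 10.0, 30)                    # true size classes (micron)

    # Made-up response matrix: each size class smears into neighbouring instrument bins.
    K = np.array([[np.exp(-0.5 * ((i - j) / 1.5) ** 2) for j in range(30)] for i in range(30)])
    K /= K.sum(axis=0, keepdims=True)

    def expected_counts(mu, sigma, total=5_000):
        w = np.exp(-0.5 * ((np.log(sizes) - mu) / sigma) ** 2)
        return total * K @ (w / w.sum())

    observed = rng.poisson(expected_counts(np.log(1.0), 0.5))   # synthetic measurement

    def log_post(mu, sigma):
        lam = expected_counts(mu, sigma)
        return np.sum(observed * np.log(lam) - lam)             # Poisson log-likelihood, flat prior

    # Random-walk Metropolis over (mu, log sigma).
    theta = np.array([0.0, np.log(0.3)])
    lp = log_post(theta[0], np.exp(theta[1]))
    samples = []
    for _ in range(20_000):
        prop = theta + rng.normal(0, 0.05, size=2)
        lp_prop = log_post(prop[0], np.exp(prop[1]))
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    samples = np.array(samples[5_000:])                         # discard burn-in
    print("posterior mean of (mu, sigma):", samples[:, 0].mean(), np.exp(samples[:, 1]).mean())
    ```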

  5. The geometry of proliferating dicot cells.

    PubMed

    Korn, R W

    2001-02-01

    The distributions of cell size and cell cycle duration were studied in two-dimensional expanding plant tissues. Plastic imprints of the leaf epidermis of three dicot plants, jade (Crassula argentae), impatiens (Impatiens wallerana), and the common begonia (Begonia semperflorens) were made and cell outlines analysed. The average, standard deviation and coefficient of variance (CV = 100 x standard deviation/average) of cell size were determined with the CV of mother cells less than the CV for daughter cells and both are less than that for all cells. An equation was devised as a simple description of the probability distribution of sizes for all cells of a tissue. Cell cycle durations as measured in arbitrary time units were determined by reconstructing the initial and final sizes of cells and they collectively give the expected asymmetric bell-shaped probability distribution. Given the features of unequal cell division (an average of 11.6% difference in size of daughter cells) and the size variation of dividing cells, it appears that the range of cell size is more critically regulated than the size of a cell at any particular time.

  6. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws, such as cracks and crack-like flaws, are to be detected using these NDE methods, and a reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing POD demonstration experiments that use the point estimate method. The POD point estimate method is used by NASA for qualifying special NDE procedures; it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.

  7. Estimation and applications of size-based distributions in forestry

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Size-based distributions arise in several contexts in forestry and ecology. Simple power relationships (e.g., basal area and diameter at breast height) between variables are one such area of interest arising from a modeling perspective. Another, probability proportional to size sampling (PPS), is found in the most widely used methods for sampling standing or dead and...

  8. Estimation and applications of size-biased distributions in forestry

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Size-biased distributions arise naturally in several contexts in forestry and ecology. Simple power relationships (e.g. basal area and diameter at breast height) between variables are one such area of interest arising from a modelling perspective. Another, probability proportional to size (PPS) sampling, is found in the most widely used methods for sampling standing or...

  9. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  10. Self-imposed length limits in recreational fisheries

    USGS Publications Warehouse

    Chizinski, Christopher J.; Martin, Dustin R.; Hurley, Keith L.; Pope, Kevin L.

    2014-01-01

    A primary motivating factor in the decision to harvest a fish among consumptive-orientated anglers is the size of the fish. There is likely a cost-benefit trade-off for harvest of individual fish that is size and species dependent, which should produce a logistic-type response of fish fate (release or harvest) as a function of fish size and species. We define the self-imposed length limit as the length at which a captured fish had a 50% probability of being harvested, which was selected because it marks the length of the fish where the probability of harvest becomes greater than the probability of release. We assessed the influences of fish size, catch per unit effort, size distribution of caught fish, and creel limit on the self-imposed length limits for bluegill Lepomis macrochirus, channel catfish Ictalurus punctatus, black crappie Pomoxis nigromaculatus and white crappie Pomoxis annularis combined, white bass Morone chrysops, and yellow perch Perca flavescens at six lakes in Nebraska, USA. As we predicted, the probability of harvest increased with increasing size for all species harvested, which supported the concept of a size-dependent trade-off in costs and benefits of harvesting individual fish. It was also clear that the probability of harvest was not simply defined by fish length, but rather was likely influenced to various degrees by interactions between species, catch rate, size distribution, creel-limit regulation and fish size. A greater understanding of harvest decisions within the context of the perceived likelihood that a creel limit will be realized by a given angler party, which is a function of fish availability, harvest regulation and angler skill and orientation, is needed to predict the influence that anglers have on fish communities and to allow managers to sustainably manage exploited fish populations in recreational fisheries.
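
    The logistic relationship described above can be made concrete with a small sketch: fit harvest probability against fish length and read off the self-imposed length limit as the length at 50% harvest probability. The catch records below are hypothetical.

    ```python
    # Sketch: logistic fit of harvest (1) vs. release (0) against length, and the
    # self-imposed length limit L50 = length at P(harvest) = 0.5. Data are made up.
    import numpy as np
    from scipy import optimize

    length_mm = np.array([150, 165, 180, 195, 210, 225, 240, 255, 270, 285,
                          300, 315, 330, 345, 360, 375, 390, 405, 420, 435])
    harvested = np.array([0, 0, 0, 0, 0, 1, 0, 1, 0, 1,
                          1, 1, 0, 1, 1, 1, 1, 1, 1, 1])

    x = (length_mm - length_mm.mean()) / 100.0          # centre/scale for a stable fit

    def nll(params):
        b0, b1 = params
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(harvested * np.log(p) + (1 - harvested) * np.log(1 - p))

    fit = optimize.minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
    b0, b1 = fit.x
    l50 = length_mm.mean() - 100.0 * b0 / b1            # length at P(harvest) = 0.5
    print(f"self-imposed length limit ≈ {l50:.0f} mm")
    ```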

  11. Unified nano-mechanics based probabilistic theory of quasibrittle and brittle structures: I. Strength, static crack growth, lifetime and scaling

    NASA Astrophysics Data System (ADS)

    Le, Jia-Liang; Bažant, Zdeněk P.; Bazant, Martin Z.

    2011-07-01

    Engineering structures must be designed for an extremely low failure probability such as 10⁻⁶, which is beyond the means of direct verification by histogram testing. This is not a problem for brittle or ductile materials because the type of probability distribution of structural strength is fixed and known, making it possible to predict the tail probabilities from the mean and variance. It is a problem, though, for quasibrittle materials for which the type of strength distribution transitions from Gaussian to Weibullian as the structure size increases. These are heterogeneous materials with brittle constituents, characterized by material inhomogeneities that are not negligible compared to the structure size. Examples include concrete, fiber composites, coarse-grained or toughened ceramics, rocks, sea ice, rigid foams and bone, as well as many materials used in nano- and microscale devices. This study presents a unified theory of strength and lifetime for such materials, based on activation energy controlled random jumps of the nano-crack front, and on the nano-macro multiscale transition of tail probabilities. Part I of this study deals with the case of monotonic and sustained (or creep) loading, and Part II with fatigue (or cyclic) loading. On the scale of the representative volume element of material, the probability distribution of strength has a Gaussian core onto which a remote Weibull tail is grafted at failure probability of the order of 10⁻³. With increasing structure size, the Weibull tail penetrates into the Gaussian core. The probability distribution of static (creep) lifetime is related to the strength distribution by the power law for the static crack growth rate, for which a physical justification is given. The present theory yields a simple relation between the exponent of this law and the Weibull moduli for strength and lifetime. The benefit is that the lifetime distribution can be predicted from short-time tests of the mean size effect on strength and tests of the power law for the crack growth rate. The theory is shown to match closely numerous test data on strength and static lifetime of ceramics and concrete, and explains why their histograms deviate systematically from the straight line in Weibull scale. Although the present unified theory is built on several previous advances, new contributions are here made to address: (i) a crack in a disordered nano-structure (such as that of hydrated Portland cement), (ii) tail probability of a fiber bundle (or parallel coupling) model with softening elements, (iii) convergence of this model to the Gaussian distribution, (iv) the stress-life curve under constant load, and (v) a detailed random walk analysis of crack front jumps in an atomic lattice. The nonlocal behavior is captured in the present theory through the finiteness of the number of links in the weakest-link model, which explains why the mean size effect coincides with that of the previously formulated nonlocal Weibull theory. Brittle structures correspond to the large-size limit of the present theory. An important practical conclusion is that the safety factors for strength and tolerable minimum lifetime for large quasibrittle structures (e.g., concrete structures and composite airframes or ship hulls, as well as various micro-devices) should be calculated as a function of structure size and geometry.
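
    A schematic numerical illustration of the weakest-link scaling used by the theory (my own construction with made-up parameters, not the paper's calibration): graft a Weibull tail onto a Gaussian core for one representative volume element, chain N of them, and track how the strength at failure probability 10⁻⁶ falls with size.

    ```python
    # Sketch: Gauss-Weibull grafted single-RVE strength cdf, then weakest-link chaining
    # P_f(sigma, N) = 1 - (1 - P_1(sigma))**N. All parameters are illustrative.
    import numpy as np
    from scipy import stats

    m, s0 = 24.0, 1.0                 # Weibull modulus and scale of the grafted tail
    mu, sd = 1.0, 0.08                # Gaussian core of single-RVE strength (normalised)
    Pgr = 1e-3                        # grafting probability

    sigma = np.linspace(0.3, 1.5, 20_000)
    gauss = stats.norm.cdf(sigma, mu, sd)
    weibull = 1 - np.exp(-(sigma / s0) ** m)

    # Grafted single-RVE cdf: Weibull below the grafting point, rescaled Gaussian above.
    sigma_gr = sigma[np.searchsorted(weibull, Pgr)]
    P1 = np.where(sigma <= sigma_gr, weibull,
                  Pgr + (1 - Pgr) * (gauss - stats.norm.cdf(sigma_gr, mu, sd)) /
                  (1 - stats.norm.cdf(sigma_gr, mu, sd)))

    for N in (1, 10, 1_000, 100_000):               # number of RVEs ~ structure size
        Pf = 1 - (1 - P1) ** N                      # weakest-link (chain) model
        s_design = sigma[np.searchsorted(Pf, 1e-6)]
        print(f"N = {N:>6}: strength at Pf = 1e-6 is {s_design:.3f}")
    ```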

  12. In vitro quantification of the size distribution of intrasaccular voids left after endovascular coiling of cerebral aneurysms.

    PubMed

    Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B

    2013-03-01

    Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal because hemodynamic forces causing compaction are not well understood since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31±2%) were lower (p = 0.04) than the stent-assist group (40±7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group as compared to the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed with decay constants of 6-10 mm. Significant (p <= 0.001 to p = 0.03) spatial trends were noted with the void sizes but correlation coefficients were generally low (absolute r <= 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and be used to compare and optimize coil configurations as well as coiling techniques.

  13. The Statistics of Urban Scaling and Their Connection to Zipf’s Law

    PubMed Central

    Gomez-Lievano, Andres; Youn, HyeJin; Bettencourt, Luís M. A.

    2012-01-01

    Urban scaling relations characterizing how diverse properties of cities vary on average with their population size have recently been shown to be a general quantitative property of many urban systems around the world. However, in previous studies the statistics of urban indicators were not analyzed in detail, raising important questions about the full characterization of urban properties and how scaling relations may emerge in these larger contexts. Here, we build a self-consistent statistical framework that characterizes the joint probability distributions of urban indicators and city population sizes across an urban system. To develop this framework empirically we use one of the most granular and stochastic urban indicators available, specifically measuring homicides in cities of Brazil, Colombia and Mexico, three nations with high and fast changing rates of violent crime. We use these data to derive the conditional probability of the number of homicides per year given the population size of a city. To do this we use Bayes’ rule together with the estimated conditional probability of city size given their number of homicides and the distribution of total homicides. We then show that scaling laws emerge as expectation values of these conditional statistics. Knowledge of these distributions implies, in turn, a relationship between scaling and population size distribution exponents that can be used to predict Zipf’s exponent from urban indicator statistics. Our results also suggest how a general statistical theory of urban indicators may be constructed from the stochastic dynamics of social interaction processes in cities. PMID:22815745

  14. Estimating probable flaw distributions in PWR steam generator tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorman, J.A.; Turner, A.P.L.

    1997-02-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  15. Voronoi Cell Patterns: theoretical model and application to submonolayer growth

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Einstein, T. L.

    2012-02-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.

  16. Coalescence computations for large samples drawn from populations of time-varying sizes

    PubMed Central

    Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek

    2017-01-01

    We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for coalescent with large sample size. The obtained results are based on computational methodologies, which involve combining coalescence time scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluation of accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for analysis of large human mitochondrial DNA dataset. PMID:28170404

  17. Modeling pore corrosion in normally open gold- plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  18. A New Bond Albedo for Performing Orbital Debris Brightness to Size Transformations

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Matney, Mark J.

    2008-01-01

    We have developed a technique for estimating the intrinsic size distribution of orbital debris objects via optical measurements alone. The process is predicated on the empirically observed power-law size distribution of debris (as indicated by radar RCS measurements) and the log-normal probability distribution of optical albedos as ascertained from phase (Lambertian) and range-corrected telescopic brightness measurements. Since the observed distribution of optical brightness is the product integral of the size distribution of the parent [debris] population with the albedo probability distribution, it is a straightforward matter to transform a given distribution of optical brightness back to a size distribution by the appropriate choice of a single albedo value. This is true because the integration of a power law with a log-normal distribution (Fredholm integral of the first kind) yields a Gaussian-blurred power-law distribution with identical power-law exponent. Application of a single albedo to this distribution recovers a simple power law [in size] which is linearly offset from the original distribution by a constant whose value depends on the choice of the albedo. Significantly, there exists a unique Bond albedo which, when applied to an observed brightness distribution, yields zero offset and therefore recovers the original size distribution. For physically realistic power laws of negative slope, the proper choice of albedo recovers the parent size distribution by compensating for the observational bias caused by the large number of small objects that appear anomalously large (bright) - and thereby skew the small population upward by rising above the detection threshold - and the lower number of large objects that appear anomalously small (dim). Based on this comprehensive analysis, a value of 0.13 should be applied to all orbital debris albedo-based brightness-to-size transformations regardless of data source. Its prima facie genesis, derived and constructed from the current RCS-to-size conversion methodology (SiBAM Size-Based Estimation Model) and optical data reduction standards, assures consistency in application with the prior canonical value of 0.1. Herein we present the empirical and mathematical arguments for this approach and by example apply it to a comprehensive set of photometric data acquired via NASA's Liquid Mirror Telescopes during the 2000-2001 observing season.
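
    The offset-compensation idea can be reproduced numerically on a synthetic population; the power-law exponent, the log-normal albedo parameters, and the candidate albedos below are made up for illustration, so the near-zero-offset value that emerges here is a property of these assumptions, not a re-derivation of the recommended 0.13.

    ```python
    # Sketch: a power-law size population seen through log-normal albedos remains a
    # power law in brightness; one single-albedo choice maps brightness back to size
    # with (nearly) zero offset in the cumulative counts. All parameters are made up.
    import numpy as np

    rng = np.random.default_rng(5)
    q = 2.5                                        # assumed cumulative power-law exponent
    d_true = 0.01 * rng.pareto(q, 500_000) + 0.01  # sizes in m, N(>d) ~ d**-q above 0.01 m
    albedo = rng.lognormal(np.log(0.1), 0.6, d_true.size)   # log-normal albedos, median 0.1

    flux = albedo * d_true ** 2                    # range/phase-corrected brightness proxy

    def offset(assumed_albedo, d_ref=0.1):
        """Log10 offset between recovered and true cumulative counts above d_ref."""
        d_est = np.sqrt(flux / assumed_albedo)
        return np.log10((d_est > d_ref).sum() / (d_true > d_ref).sum())

    for a in (0.08, 0.10, 0.13, 0.18):
        print(f"assumed albedo {a:.2f}: cumulative-count offset = {offset(a):+.3f} dex")
    ```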

  19. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
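
    A hedged sketch of the recommended approach (fitting to the probability mass actually captured by each sampling interval, i.e., the cumulative distribution, by maximum likelihood) contrasted with the mid-point shortcut; the interval bounds and counts below are hypothetical.

    ```python
    # Sketch: interval-censored maximum likelihood fit of a log-normal retention-time
    # distribution vs. naively fitting interval mid-points. Data are hypothetical.
    import numpy as np
    from scipy import stats, optimize

    # Sampling intervals (hours) and number of propagules recovered in each.
    bounds = np.array([[0, 2], [2, 4], [4, 8], [8, 12], [12, 24], [24, 48]], float)
    counts = np.array([40, 130, 180, 90, 50, 10])

    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        cdf = lambda t: stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu))
        p = cdf(bounds[:, 1]) - cdf(bounds[:, 0])          # mass captured by each interval
        return -np.sum(counts * np.log(np.clip(p, 1e-12, None)))

    fit = optimize.minimize(nll, x0=[np.log(6.0), np.log(0.5)], method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(f"fitted log-normal: median = {np.exp(mu_hat):.1f} h, sigma = {sigma_hat:.2f}")

    # For contrast, the common shortcut of fitting the interval mid-points directly:
    mids = np.repeat(bounds.mean(axis=1), counts)
    print(f"mid-point shortcut median  = {np.exp(np.log(mids).mean()):.1f} h")
    ```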

  20. A Comparison of the Exact Kruskal-Wallis Distribution to Asymptotic Approximations for All Sample Sizes up to 105

    ERIC Educational Resources Information Center

    Meyer, J. Patrick; Seaman, Michael A.

    2013-01-01

    The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…

  1. Slip-Size Distribution and Self-Organized Criticality in Block-Spring Models with Quenched Randomness

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Kadowaki, Shuntaro

    2017-07-01

    We study slowly pulled block-spring models in random media. Second-order phase transitions exist in a model pulled by a constant force in the case of velocity-strengthening friction. If external forces are slowly increased, nearly critical states are self-organized. Slips of various sizes occur, and the probability distributions of slip size roughly obey power laws. The exponent is close to that in the quenched Edwards-Wilkinson model. Furthermore, the slip-size distributions are investigated in cases of Coulomb friction, velocity-weakening friction, and two-dimensional block-spring models.

  2. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249

  3. Fishnet statistics for probabilistic strength and scaling of nacreous imbricated lamellar materials

    NASA Astrophysics Data System (ADS)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Similar to nacre (or brick masonry), imbricated (or staggered) lamellar structures are widely found in nature and man-made materials, and are of interest for biomimetics. They can achieve high defect insensitivity and fracture toughness, as demonstrated in previous studies. But their strength probability distribution with a realistic far-left tail is apparently unknown. Here, strictly for statistical purposes, the microstructure of nacre is approximated by a diagonally pulled fishnet with quasibrittle links representing the shear bonds between parallel lamellae (or platelets). The probability distribution of fishnet strength is calculated as a sum of a rapidly convergent series of the failure probabilities after the rupture of one, two, three, etc., links. Each of them represents a combination of joint probabilities and of additive probabilities of disjoint events, modified near the zone of failed links by the stress redistributions caused by previously failed links. Based on previous nano- and multi-scale studies at Northwestern, the strength distribution of each link, characterizing the interlamellar shear bond, is assumed to be a Gauss-Weibull graft, but with a deeper Weibull tail than in Type 1 failure of non-imbricated quasibrittle materials. The autocorrelation length is considered equal to the link length. The size of the zone of failed links at maximum load increases with the coefficient of variation (CoV) of link strength, and also with fishnet size. With an increasing width-to-length aspect ratio, a rectangular fishnet gradually transitions from the weakest-link chain to the fiber bundle, as the limit cases. The fishnet strength at failure probability 10^-6 grows with the width-to-length ratio. For a square fishnet boundary, the strength at 10^-6 failure probability is about 11% higher, while at fixed load the failure probability is about 25 times higher than it is for the non-imbricated case. This is a major safety advantage of the fishnet architecture over particulate or fiber-reinforced materials. There is also a strong size effect, partly similar to that of Type 1, although the curves of log-strength versus log-size for different sizes could cross each other. The predicted behavior is verified by about a million Monte Carlo simulations for each of many fishnet geometries, sizes and CoVs of link strength. In addition to the weakest-link or fiber bundle, the fishnet becomes the third analytically tractable statistical model of structural strength, and has the former two as limit cases.

  4. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    PubMed

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
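    A one-line numerical illustration of the point (hypothetical numbers): two surveys of the same true population size N produce very different raw counts when the detection probability p changes, and only p-adjusted estimates recover N.

        N = 500                                   # true population size (hypothetical)
        p1, p2 = 0.7, 0.4                         # detection probabilities in two surveys (hypothetical)
        c1, c2 = N * p1, N * p2                   # expected counts C = N * p
        print(c1, c2)                             # 350 vs 200: an apparent "decline"
        print(c1 / p1, c2 / p2)                   # both adjusted estimates return 500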

  5. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  6. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  7. Theoretical calculation of the cratering on Ida, Mathilde, Eros and Gaspra

    NASA Astrophysics Data System (ADS)

    Jeffers, S. V.; Asher, D. J.

    2003-07-01

    The main influences on crater size distributions are investigated by deriving results for the four example target objects, (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY numerical integrator. An efficient, Öpik-type, collision code enables the distribution of impact velocities and the overall impact probability to be found. When combined with a crater scaling law and an impactor size distribution, using a Monte Carlo method, this yields a crater size distribution. The cratering time-scale is longer for Ida than either Gaspra or Mathilde, though it is harder to constrain for Eros due to the chaotic variation of its orbital elements. The slopes of the crater size distribution are in accord with observations.

  8. Probability density of aperture-averaged irradiance fluctuations for long range free space optical communication links.

    PubMed

    Lyke, Stephen D; Voelz, David G; Roggemann, Michael C

    2009-11-20

    The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagating through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius ρ0 and lognormal for aperture sizes on the order of ρ0 and larger. Examples of how these results affect the bit-error rate of an on-off keyed free space optical communication link are presented.
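    For reference, a sketch of the standard gamma-gamma PDF of normalized irradiance that the comparison refers to; the parameter values here are illustrative only (in practice alpha and beta would be set from scintillation theory for the given turbulence conditions).

        import numpy as np
        from scipy.special import gamma as Gamma, kv

        def gamma_gamma_pdf(I, alpha, beta):
            """Gamma-gamma PDF of normalized irradiance I (unit mean)."""
            coef = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (Gamma(alpha) * Gamma(beta))
            return coef * I ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

        I = np.linspace(0.01, 4.0, 400)             # grid of normalized irradiance values
        pdf = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)   # illustrative parameters
        print((pdf * (I[1] - I[0])).sum())          # integrates to roughly 1 over this range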

  9. Nature of the Martian surface as inferred from the particle-size distribution of lunar-surface material.

    NASA Technical Reports Server (NTRS)

    Mason, C. C.

    1971-01-01

    Analysis of lunar particle size distribution data indicates that the surface material is composed of two populations. One population is caused by comminution from the impact of the larger-sized meteorites, while the other population is caused by the melting of fine material by the impact of smaller-sized meteorites. The results are referred to Mars, and it is shown that the Martian atmosphere would vaporize the smaller incoming meteorites and retard the incoming meteorites of intermediate and large size, causing comminution and stirring of the particulate layer. The combination of comminution and stirring would result in fine material being sorted out by the prevailing circulation of the Martian atmosphere and the material being transported to regions where it could be deposited. As a result, the Martian surface in regions of prevailing upward circulation is probably covered by either a rubble layer or by desert pavement; regions of prevailing downward circulation are probably covered by sand dunes.

  10. A testable model of earthquake probability based on changes in mean event size

    NASA Astrophysics Data System (ADS)

    Imoto, Masajiro

    2003-02-01

    We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing the changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The distribution of the background is a symmetric function, the center of which corresponds to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends to decrease the b value. The difference in the distributions between the two groups was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in the range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.

  11. Moments of catchment storm area

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Wang, Q.

    1985-01-01

    The portion of a catchment covered by a stationary rainstorm is modeled by the common area of two overlapping circles. Given that rain occurs within the catchment and conditioned by fixed storm and catchment sizes, the first two moments of the distribution of the common area are derived from purely geometrical considerations. The variance of the wetted fraction is shown to peak when the catchment size is equal to the size of the predominant storm. The conditioning on storm size is removed by assuming a probability distribution based upon the observed fractal behavior of cloud and rainstorm areas.

  12. Aggregate and individual replication probability within an explicit model of the research process.

    PubMed

    Miller, Jeff; Schwarz, Wolf

    2011-09-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
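    A minimal Monte Carlo sketch of a model of this kind (all three sources of variation normal); the parameter values are hypothetical, and the replication criterion used here is simply "any effect in the same direction", one of the two criteria mentioned above.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        mu_d, sd_d = 0.3, 0.2      # distribution of true effect sizes in the research context (assumed)
        sd_jit = 0.1               # replication jitter from procedural changes (assumed)
        sd_err = 0.15              # statistical error of each measured effect (assumed)

        delta = rng.normal(mu_d, sd_d, n)              # true effect behind each initial study
        obs1 = delta + rng.normal(0.0, sd_err, n)      # initial observed effect
        obs2 = delta + rng.normal(0.0, sd_jit, n) + rng.normal(0.0, sd_err, n)   # replication
        positive = obs1 > 0
        print(np.mean(obs2[positive] > 0))             # aggregate probability of same-direction replication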

  13. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances for the one-sided and two-sided cases for the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k's associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
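    For the one-sided case the tolerance factor can be computed directly from a noncentral t quantile, as the following sketch shows (the two-sided factor requires the iterative integration and root-solving the abstract describes); the example numbers are arbitrary.

        import numpy as np
        from scipy.stats import nct, norm

        def k_one_sided(n, proportion, confidence):
            """Exact one-sided normal tolerance factor k for sample size n."""
            delta = norm.ppf(proportion) * np.sqrt(n)           # noncentrality parameter
            return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

        print(k_one_sided(n=29, proportion=0.90, confidence=0.95))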

  14. Measures for a multidimensional multiverse

    NASA Astrophysics Data System (ADS)

    Chung, Hyeyoun

    2015-04-01

    We explore the phenomenological implications of generalizing the causal patch and fat geodesic measures to a multidimensional multiverse, where the vacua can have differing numbers of large dimensions. We consider a simple model in which the vacua are nucleated from a D-dimensional parent spacetime through dynamical compactification of the extra dimensions, and compute the geometric contribution to the probability distribution of observations within the multiverse for each measure. We then study how the shape of this probability distribution depends on the time scales for the existence of observers, for vacuum domination, and for curvature domination (t_obs, t_Λ, and t_c, respectively). In this work we restrict ourselves to bubbles with positive cosmological constant, Λ. We find that in the case of the causal patch cutoff, when the bubble universes have p+1 large spatial dimensions with p ≥ 2, the shape of the probability distribution is such that we obtain the coincidence of time scales t_obs ~ t_Λ ~ t_c. Moreover, the size of the cosmological constant is related to the size of the landscape. However, the exact shape of the probability distribution is different in the case p = 2, compared to p ≥ 3. In the case of the fat geodesic measure, the result is even more robust: the shape of the probability distribution is the same for all p ≥ 2, and we once again obtain the coincidence t_obs ~ t_Λ ~ t_c. These results require only very mild conditions on the prior probability of the distribution of vacua in the landscape. Our work shows that the observed double coincidence of time scales is a robust prediction even when the multiverse is generalized to be multidimensional; that this coincidence is not a consequence of our particular Universe being (3+1)-dimensional; and that this observable cannot be used to preferentially select one measure over another in a multidimensional multiverse.

  15. The Experiment of the Clog Reduction in a Plane Silo

    NASA Astrophysics Data System (ADS)

    Sun, Ai-Le; Zhang, Jie

    2017-06-01

    The flow of particles may be clogged when they pass through a narrow orifice. Many factors can change the probability of clogging, such as the outlet size, the presence of obstacles and external perturbation, but the detailed mechanisms are still unclear. In this paper, we present an experimental study of the reduction of the clogging probability in a horizontal plane silo, which consists of a layer of elastic particles transported on an annular flat plate rotating with a constant angular velocity passing through a hopper structure. We found exponential distributions of the avalanche size for different orifice sizes and power-law tails of the passing time between two particles. We did not confirm whether there was a critical orifice size above which clogging became impossible. We explored the effect of an obstacle on the probability of clogging: if we chose a proper obstacle placed at a proper position, the probability of clogging could be reduced by a factor of about seven.

  16. United States Geological Survey fire science: fire danger monitoring and forecasting

    USGS Publications Warehouse

    Eidenshink, Jeff C.; Howard, Stephen M.

    2012-01-01

    Each day, the U.S. Geological Survey produces 7-day forecasts for all Federal lands of the distributions of number of ignitions, number of fires above a given size, and conditional probabilities of fires growing larger than a specified size. The large fire probability map is an estimate of the likelihood that ignitions will become large fires. The large fire forecast map is a probability estimate of the number of fires on federal lands exceeding 100 acres in the forthcoming week. The ignition forecast map is a probability estimate of the number of fires on Federal land greater than 1 acre in the forthcoming week. The extreme event forecast is the probability estimate of the number of fires on Federal land that may exceed 5,000 acres in the forthcoming week.

  17. The ranking probability approach and its usage in design and analysis of large-scale studies.

    PubMed

    Kuo, Chia-Ling; Zaykin, Dmitri

    2013-01-01

    In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal [Formula: see text]-level such as 0.05 is adjusted by the number of tests, [Formula: see text], i.e., as 0.05/[Formula: see text]. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed [Formula: see text]-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability [Formula: see text] is controlled, defined as the probability of making at least [Formula: see text] correct rejections while rejecting hypotheses with [Formula: see text] smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., [Formula: see text]) is equal to the power at the level [Formula: see text], to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when [Formula: see text] is very large and [Formula: see text] is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.

  18. Distribution, characterization, and exposure of MC252 oil in the supratidal beach environment.

    PubMed

    Lemelle, Kendall R; Elango, Vijaikrishnah; Pardue, John H

    2014-07-01

    The distribution and characteristics of MC252 oil:sand aggregates, termed surface residue balls (SRBs), were measured on the supratidal beach environment of oil-impacted Fourchon Beach in Louisiana (USA). Probability distributions of 4 variables, surface coverage (%), size of SRBs (mm² of projected area), mass of SRBs per m² (g/m²), and concentrations of polycyclic aromatic hydrocarbons (PAHs) and n-alkanes in the SRBs (mg of crude oil component per kg of SRB), were determined using parametric and nonparametric statistical techniques. Surface coverage of SRBs, an operational remedial standard for the beach surface, was a gamma-distributed variable ranging from 0.01% to 8.1%. The SRB sizes had a mean of 90.7 mm² but fit no probability distribution, and a nonparametric ranking was used to describe the size distributions. Concentrations of total PAHs ranged from 2.5 mg/kg to 126 mg/kg of SRB. Individual PAH concentration distributions, consisting primarily of alkylated phenanthrenes, dibenzothiophenes, and chrysenes, did not consistently fit a parametric distribution. Surface coverage was correlated with an oil mass per unit area but with a substantial error at lower coverage (i.e., <2%). These data provide probabilistic risk assessors with the ability to specify uncertainty in PAH concentration, exposure frequency, and ingestion rate, based on SRB characteristics for the dominant oil form on beaches along the US Gulf Coast. © 2014 SETAC.

  19. N-mixture models for estimating population size from spatially replicated counts

    USGS Publications Warehouse

    Royle, J. Andrew

    2004-01-01

    Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to the choice of prior on p and the substantially different estimates of abundance that result.
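    A minimal sketch of the marginal likelihood at the heart of the N-mixture idea, for a single site with repeated counts and a Poisson mixing distribution; the counts and parameter values are hypothetical, and in practice the product of such terms over all sites is maximized over lambda and p (with the sum over N truncated at a suitably large bound).

        import numpy as np
        from scipy.stats import binom, poisson

        def site_marginal_likelihood(counts, lam, p, n_max=200):
            """Sum over latent abundance N of Poisson(N | lam) * prod_t Binomial(count_t | N, p)."""
            N = np.arange(n_max + 1)
            terms = poisson.pmf(N, lam)
            for c in counts:
                terms = terms * binom.pmf(c, N, p)
            return terms.sum()

        print(site_marginal_likelihood(counts=[3, 5, 2], lam=6.0, p=0.5))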

  20. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.

  1. Changes in tropical precipitation cluster size distributions under global warming

    NASA Astrophysics Data System (ADS)

    Neelin, J. D.; Quinn, K. M.

    2016-12-01

    The total amount of precipitation integrated across a tropical storm or other precipitation feature (contiguous clusters of precipitation exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance. To establish baseline behavior in current climate, the probability distribution of cluster sizes from multiple satellite retrievals and National Center for Environmental Prediction (NCEP) reanalysis is compared to those from Coupled Model Intercomparison Project (CMIP5) models and the Geophysical Fluid Dynamics Laboratory high-resolution atmospheric model (HIRAM-360 and -180). With the caveat that a minimum rain rate threshold is important in the models (which tend to overproduce low rain rates), the models agree well with observations in leading properties. In particular, scale-free power law ranges in which the probability drops slowly with increasing cluster size are well modeled, followed by a rapid drop in probability of the largest clusters above a cutoff scale. Under the RCP 8.5 global warming scenario, the models indicate substantial increases in probability (up to an order of magnitude) of the largest clusters by the end of century. For models with continuous time series of high resolution output, there is substantial spread on when these probability increases for the largest precipitation clusters should be detectable, ranging from detectable within the observational period to statistically significant trends emerging only in the second half of the century. Examination of NCEP reanalysis and SSMI/SSMIS series of satellite retrievals from 1979 to present does not yield reliable evidence of trends at this time. The results suggest improvements in inter-satellite calibration of the SSMI/SSMIS retrievals could aid future detection.

  2. Anomalous finite-size effects in the Battle of the Sexes

    NASA Astrophysics Data System (ADS)

    Cremer, J.; Reichenbach, T.; Frey, E.

    2008-06-01

    The Battle of the Sexes describes asymmetric conflicts in the mating behavior of males and females. Males can be philanderers or faithful, while females are either fast or coy, leading to cyclic dynamics. The adjusted replicator equation predicts stable coexistence of all four strategies. In this situation, we consider the effects of fluctuations stemming from a finite population size. We show that they unavoidably lead to extinction of two strategies in the population. However, the typical time until extinction occurs is strongly prolonged with increasing system size. In the emerging time window, a quasi-stationary probability distribution forms that is anomalously flat in the vicinity of the coexistence state. This behavior originates in a vanishing linear deterministic drift near the fixed point. We provide numerical data as well as an analytical approach to the mean extinction time and the quasi-stationary probability distribution.

  3. Use of the truncated shifted Pareto distribution in assessing size distribution of oil and gas fields

    USGS Publications Warehouse

    Houghton, J.C.

    1988-01-01

    The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution, in which one parameter is added to shift the distribution right and left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. © 1988 International Association for Mathematical Geology.

  4. Transient queue-size distribution in a finite-capacity queueing system with server breakdowns and Bernoulli feedback

    NASA Astrophysics Data System (ADS)

    Kempa, Wojciech M.

    2017-12-01

    A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After processing, a customer may either rejoin the queue (feedback) with probability q, or definitely leave the system with probability 1 - q. The system of integral equations for the transient queue-size distribution, conditioned by the initial level of buffer saturation, is built. The solution of the corresponding system, written for Laplace transforms, is found using the linear algebraic approach. The considered queueing system can be successfully used in modelling production lines with machine failures, in which the parameter q may be considered as a typical fraction of items demanding corrections. Moreover, this queueing model can be applied in the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.

  5. Individual heterogeneity and identifiability in capture-recapture models

    USGS Publications Warehouse

    Link, W.A.

    2004-01-01

    Individual heterogeneity in detection probabilities is a far more serious problem for capture-recapture modeling than has previously been recognized. In this note, I illustrate that population size is not an identifiable parameter under the general closed-population mark-recapture model M_h. The problem of identifiability is obvious if the population includes individuals with p_i = 0, but persists even when it is assumed that individual detection probabilities are bounded away from zero. Identifiability may be attained within parametric families of distributions for p_i, but not among parametric families of distributions. Consequently, in the presence of individual heterogeneity in detection probability, capture-recapture analysis is strongly model dependent.

  6. Sizing aerosolized fractal nanoparticle aggregates through Bayesian analysis of wide-angle light scattering (WALS) data

    NASA Astrophysics Data System (ADS)

    Huber, Franz J. T.; Will, Stefan; Daun, Kyle J.

    2016-11-01

    Inferring the size distribution of aerosolized fractal aggregates from the angular distribution of elastically scattered light is a mathematically ill-posed problem. This paper presents a procedure for analyzing Wide-Angle Light Scattering (WALS) data using Bayesian inference. The outcome is probability densities for the recovered size distribution and aggregate morphology parameters. This technique is applied to both synthetic data and experimental data collected on soot-laden aerosols, using a measurement equation derived from Rayleigh-Debye-Gans fractal aggregate (RDG-FA) theory. In the case of experimental data, the recovered aggregate size distribution parameters are generally consistent with TEM-derived values, but the accuracy is impaired by the well-known limited accuracy of RDG-FA theory. Finally, we show how this bias could potentially be avoided using the approximation error technique.

  7. Sampling--how big a sample?

    PubMed

    Aitken, C G

    1999-07-01

    It is thought that, in a consignment of discrete units, a certain proportion of the units contain illegal material. A sample of the consignment is to be inspected. Various methods for the determination of the sample size are compared. The consignment will be considered as a random sample from some super-population of units, a certain proportion of which contain drugs. For large consignments, a probability distribution, known as the beta distribution, for the proportion of the consignment which contains illegal material is obtained. This distribution is based on prior beliefs about the proportion. Under certain specific conditions the beta distribution gives the same numerical results as an approach based on the binomial distribution. The binomial distribution provides a probability for the number of units in a sample which contain illegal material, conditional on knowing the proportion of the consignment which contains illegal material. This is in contrast to the beta distribution which provides probabilities for the proportion of a consignment which contains illegal material, conditional on knowing the number of units in the sample which contain illegal material. The interpretation when the beta distribution is used is much more intuitively satisfactory. It is also much more flexible in its ability to cater for prior beliefs which may vary given the different circumstances of different crimes. For small consignments, a distribution, known as the beta-binomial distribution, for the number of units in the consignment which are found to contain illegal material, is obtained, based on prior beliefs about the number of units in the consignment which are thought to contain illegal material. As with the beta and binomial distributions for large samples, it is shown that, in certain specific conditions, the beta-binomial and hypergeometric distributions give the same numerical results. However, the beta-binomial distribution, as with the beta distribution, has a more intuitively satisfactory interpretation and greater flexibility. The beta and the beta-binomial distributions provide methods for the determination of the minimum sample size to be taken from a consignment in order to satisfy a certain criterion. The criterion requires the specification of a proportion and a probability.
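    A small sketch of the kind of minimum-sample-size computation described, under explicitly stated assumptions: a large consignment, a Beta(1, 1) (uniform) prior on the proportion, and the case in which every inspected unit turns out to contain illegal material; the threshold proportion and required probability are the two quantities the criterion asks the examiner to specify.

        from scipy.stats import beta

        def min_sample_size(theta0=0.5, prob=0.95, a=1.0, b=1.0):
            """Smallest n such that, if all n inspected units contain drugs, the posterior
            probability that the consignment proportion exceeds theta0 is at least prob."""
            n = 0
            while True:
                n += 1
                posterior = beta(a + n, b)          # Beta prior updated with n positives, 0 negatives
                if posterior.sf(theta0) >= prob:
                    return n

        print(min_sample_size(theta0=0.5, prob=0.95))   # n = 4 to be 95% sure over half contain drugs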

  8. Measurement of droplet size distribution in core region of high-speed spray by micro-probe L2F

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Daisaku; Le Amida, Oluwo; Ueki, Hironobu; Ishida, Masahiro

    2008-03-01

    In order to investigate the distribution of droplet sizes in the core region of diesel fuel spray, instantaneous measurement of droplet sizes was conducted by an advanced laser 2-focus velocimeter (L2F). The micro-scale probe of the L2F is made up of two foci and the distance between them is 36 µm. The tested nozzle had a 0.2 mm diameter single-hole. The measurements of injection pressure, needle lift, and crank angle were synchronized with the measurement by the L2F at the position 10 mm downstream from the nozzle exit. It is clearly shown that the droplet near the spray axis is larger than that in the off-axis region under the needle full lift condition and that the spatial distribution of droplet sizes varies temporally. It is found that the probability density distribution of droplet sizes in the spray core region can be fitted to the Nukiyama-Tanasawa distribution in most injection periods.

  9. Studying the Binomial Distribution Using LabVIEW

    ERIC Educational Resources Information Center

    George, Danielle J.; Hammer, Nathan I.

    2015-01-01

    This undergraduate physical chemistry laboratory exercise introduces students to the study of probability distributions both experimentally and using computer simulations. Students perform the classic coin toss experiment individually and then pool all of their data together to study the effect of experimental sample size on the binomial…

  10. Synthesis and characterization of magnetic poly(divinyl benzene)/Fe3O4, C/Fe3O4/Fe, and C/Fe onionlike fullerene micrometer-sized particles with a narrow size distribution.

    PubMed

    Snovski, Ron; Grinblat, Judith; Margel, Shlomo

    2011-09-06

    Magnetic poly(divinyl benzene)/Fe3O4 microspheres with a narrow size distribution were produced by entrapping the iron pentacarbonyl precursor within the pores of uniform porous poly(divinyl benzene) microspheres prepared in our laboratory, followed by the decomposition in a sealed cell of the entrapped Fe(CO)5 particles at 300 °C under an inert atmosphere. Magnetic onionlike fullerene microspheres with a narrow size distribution were produced by annealing the obtained PDVB/Fe3O4 particles at 500, 600, 800, and 1100 °C, respectively, under an inert atmosphere. The formation of graphitic carbon layers at low temperatures such as 500 °C is unique and probably obtained because of the presence of the magnetic iron nanoparticles. The annealing temperature allowed control of the composition, size, size distribution, crystallinity, porosity, and magnetic properties of the produced magnetic microspheres. © 2011 American Chemical Society

  11. Finite-size scaling of survival probability in branching processes

    NASA Astrophysics Data System (ADS)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y / (e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
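    The claimed universality can be checked numerically; the sketch below estimates the survival probability by direct simulation of a Galton-Watson process with Poisson offspring (the Poisson choice is an assumption made for the demonstration, since the result holds for any finite-variance offspring distribution).

        import numpy as np

        rng = np.random.default_rng(2)

        def survival_probability(m, generations, trials=20_000):
            """Monte Carlo P(population alive after `generations`) for Poisson(m) offspring."""
            alive = 0
            for _ in range(trials):
                n = 1
                for _ in range(generations):
                    n = rng.poisson(m * n)          # sum of n independent Poisson(m) draws
                    if n == 0:
                        break
                alive += n > 0
            return alive / trials

        for m in (0.95, 1.00, 1.05):                # below, at, and above the critical point
            print(m, survival_probability(m, generations=50))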

  12. The Probability of Obtaining Two Statistically Different Test Scores as a Test Index

    ERIC Educational Resources Information Center

    Muller, Jorg M.

    2006-01-01

    A new test index is defined as the probability of obtaining two randomly selected test scores (PDTS) as statistically different. After giving a concept definition of the test index, two simulation studies are presented. The first analyzes the influence of the distribution of test scores, test reliability, and sample size on PDTS within classical…

  13. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    USGS Publications Warehouse

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average 2D/3D ratio of the means is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (the reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
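    The π/4 ratio is easy to verify numerically for equal spheres: for isotropic random planes that intersect a sphere of radius R, the distance of the cut from the centre is uniform on [0, R], so the mean radius of the circular sections is E[sqrt(R² - u²)] = (π/4)R. A sketch of that check (pure illustration, not the authors' derivation):

        import numpy as np

        rng = np.random.default_rng(3)
        R = 1.0
        u = rng.uniform(0.0, R, 1_000_000)          # distance of random cutting plane from centre
        section_radii = np.sqrt(R**2 - u**2)        # radius of each circular cross-section
        print(section_radii.mean(), np.pi / 4)      # both ~0.785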

  14. COAGULATION CALCULATIONS OF ICY PLANET FORMATION AT 15-150 AU: A CORRELATION BETWEEN THE MAXIMUM RADIUS AND THE SLOPE OF THE SIZE DISTRIBUTION FOR TRANS-NEPTUNIAN OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenyon, Scott J.; Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu

    2012-03-15

    We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Belt objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.

  15. Maximizing Information Diffusion in the Cyber-physical Integrated Network †

    PubMed Central

    Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan

    2015-01-01

    Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks. PMID:26569254

  16. Modeling of Abrasion and Crushing of Unbound Granular Materials During Compaction

    NASA Astrophysics Data System (ADS)

    Ocampo, Manuel S.; Caicedo, Bernardo

    2009-06-01

    Unbound compacted granular materials are commonly used in engineering structures as layers in road pavements, railroad beds, highway embankments, and foundations. These structures are generally subjected to dynamic loading by construction operations, traffic and wheel loads. These repeated or cyclic loads cause abrasion and crushing of the granular materials. Abrasion changes a particle's shape, and crushing divides the particle into a mixture of many small particles of varying sizes. Particle breakage is important because the mechanical and hydraulic properties of these materials depend upon their grain size distribution. Therefore, it is important to evaluate the evolution of the grain size distribution of these materials. In this paper an analytical model for unbound granular materials is proposed in order to evaluate particle crushing of gravels and soils subjected to cyclic loads. The model is based on a Markov chain which describes the development of grading changes in the material as a function of stress levels. In the model proposed, each particle size is a state in the system, and the evolution of the material is the movement of particles from one state to another in n steps. Each step is a load cycle, and movement between states is possible with a transition probability. The crushing of particles depends on the mechanical properties of each grain and the packing density of the granular material. The transition probability was calculated using both the survival probability defined by Weibull and the compressible packing model developed by De Larrard. Material mechanical properties are considered using the Weibull probability theory. The size and shape of the grains, as well as the method of processing the packing density are considered using De Larrard's model. Results of the proposed analytical model show a good agreement with the experimental tests carried out using the gyratory compaction test.
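    A minimal numerical sketch of the Markov-chain idea (not the calibrated model): each size class is a state, and each load cycle multiplies the grading vector by a transition matrix whose entries would, in the full model, come from the Weibull survival probability and the packing model; the matrix and initial grading below are purely hypothetical.

        import numpy as np

        # hypothetical per-cycle transition matrix, coarse -> fine; row i gives the probability
        # that a particle in size class i survives intact or breaks into each finer class
        P = np.array([
            [0.97, 0.02, 0.01, 0.00],
            [0.00, 0.96, 0.03, 0.01],
            [0.00, 0.00, 0.98, 0.02],
            [0.00, 0.00, 0.00, 1.00],    # finest class is absorbing
        ])
        grading = np.array([0.40, 0.35, 0.20, 0.05])    # initial mass fractions per size class
        n_cycles = 500                                  # number of load cycles (gyrations)
        print(grading @ np.linalg.matrix_power(P, n_cycles))   # evolved grading after n cycles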

  17. A multimodal detection model of dolphins to estimate abundance validated by field experiments.

    PubMed

    Akamatsu, Tomonari; Ura, Tamaki; Sugimatsu, Harumi; Bahl, Rajendar; Behera, Sandeep; Panda, Sudarsan; Khan, Muntaz; Kar, S K; Kar, C S; Kimura, Satoko; Sasaki-Yamamoto, Yukiko

    2013-09-01

    Abundance estimation of marine mammals requires matching detections of an animal or a group of animals by two independent means. A multimodal detection model using visual and acoustic cues (surfacing and phonation) that enables abundance estimation of dolphins is proposed. The method does not require a specific time window to match the cues of both means for applying the mark-recapture method. The proposed model was evaluated using data obtained in field observations of Ganges River dolphins and Irrawaddy dolphins, as examples of dispersed and condensed distributions of animals, respectively. The acoustic detection probability was approximately 80%, 20% higher than that of visual detection for both species, regardless of the distribution of the animals in the present study sites. The abundance estimates of Ganges River dolphins and Irrawaddy dolphins agreed fairly well with the numbers reported in previous monitoring studies. The detection probability of a single animal was smaller than that of larger cluster sizes, as predicted by the model and confirmed by field data. However, dense groups of Irrawaddy dolphins showed differences in cluster sizes observed by visual and acoustic methods. The lower detection probability of single clusters of this species seemed to be caused by its clumped distribution.

  18. The variance of dispersion measure of high-redshift transient objects as a probe of ionized bubble size during reionization

    NASA Astrophysics Data System (ADS)

    Yoshiura, Shintaro; Takahashi, Keitaro

    2018-01-01

    The dispersion measure (DM) of high-redshift (z ≳ 6) transient objects such as fast radio bursts can be a powerful tool to probe the intergalactic medium during the Epoch of Reionization. In this paper, we study the variance of the DMs of objects with the same redshift as a potential probe of the size distribution of ionized bubbles. We calculate the DM variance with a simple model with randomly distributed spherical bubbles. It is found that the DM variance reflects the characteristics of the probability distribution of the bubble size. We find that the variance can be measured precisely enough to obtain the information on the typical size with a few hundred sources at a single redshift.

  19. Effects of resource-dependent cannibalism on population size distribution and individual life history in a case-bearing caddisfly

    PubMed Central

    Okuda, Noboru

    2018-01-01

    Resource availability often determines the intensity of cannibalism, which has a considerable effect on population size distribution and individual life history. Larvae of the caddisfly Psilotreta kisoensis build portable cases from sedimentary sands and often display cannibalism. For this species, the availability of preferable case material is a critical factor that affects larval fitness, and material is locally variable depending on the underlying geology. In this study, we investigated how sand quality as a case material determines cannibalism frequency among larvae and, in turn, how the differential cannibalism frequency affects the body-size distribution and voltinism. Rearing experiments within a cohort revealed that a bimodal size distribution developed regardless of material quality. However, as the preferable material became abundant, the proportion of larger to smaller individuals increased. Consecutive experiments suggested that smaller larvae were more frequently cannibalized by larger ones and excluded from the population when preferable smooth material was abundant. This frequent cannibalism resulted in a bimodal size distribution with a significantly higher proportion of larger compared to smaller individuals. The size-dependent cannibalism was significantly suppressed when the larvae were raised in an environment with a scarcity of the preferable case material. This is probably because larvae cannot enjoy the benefit of rapid growth by cannibalism due to the difficulties in enlarging their case. At low cannibalism the growth of smaller individuals was stunted, and this was probably due to risk of cannibalism by larger individuals. This growth reduction in small individuals led to a bimodal size-distribution but with a lower proportion of larger to smaller individuals compared to at high cannibalism. A field study in two streams showed a similar size distribution of larvae as was found in the rearing experiment. The bimodal ratio has consequences for life history, since a size-bimodal population causes a cohort splitting: only larvae that were fully grown at 1 year had a univoltine life cycle, whereas larvae with a stunted growth continued their larval life for another year (semivoltine). This study suggests that availability of preferable case building material is an important factor that affects cannibalism, which in turn affects larval population size structure and cohort splitting. PMID:29466375

  20. Effects of resource-dependent cannibalism on population size distribution and individual life history in a case-bearing caddisfly.

    PubMed

    Okano, Jun-Ichi; Okuda, Noboru

    2018-01-01

    Resource availability often determines the intensity of cannibalism, which has a considerable effect on population size distribution and individual life history. Larvae of the caddisfly Psilotreta kisoensis build portable cases from sedimentary sands and often display cannibalism. For this species, the availability of preferable case material is a critical factor that affects larval fitness, and material is locally variable depending on the underlying geology. In this study, we investigated how sand quality as a case material determines cannibalism frequency among larvae and, in turn, how the differential cannibalism frequency affects the body-size distribution and voltinism. Rearing experiments within a cohort revealed that a bimodal size distribution developed regardless of material quality. However, as the preferable material became abundant, the proportion of larger to smaller individuals increased. Consecutive experiments suggested that smaller larvae were more frequently cannibalized by larger ones and excluded from the population when preferable smooth material was abundant. This frequent cannibalism resulted in a bimodal size distribution with a significantly higher proportion of larger compared to smaller individuals. The size-dependent cannibalism was significantly suppressed when the larvae were raised in an environment with a scarcity of the preferable case material. This is probably because larvae cannot enjoy the benefit of rapid growth by cannibalism due to the difficulties in enlarging their case. At low cannibalism the growth of smaller individuals was stunted, and this was probably due to risk of cannibalism by larger individuals. This growth reduction in small individuals led to a bimodal size-distribution but with a lower proportion of larger to smaller individuals compared to at high cannibalism. A field study in two streams showed a similar size distribution of larvae as was found in the rearing experiment. The bimodal ratio has consequences for life history, since a size-bimodal population causes a cohort splitting: only larvae that were fully grown at 1 year had a univoltine life cycle, whereas larvae with a stunted growth continued their larval life for another year (semivoltine). This study suggests that availability of preferable case building material is an important factor that affects cannibalism, which in turn affects larval population size structure and cohort splitting.

  1. Steady-state distributions of probability fluxes on complex networks

    NASA Astrophysics Data System (ADS)

    Chełminiak, Przemysław; Kurzyński, Michał

    2017-02-01

    We consider a simple model of Markovian stochastic dynamics on complex networks to examine the statistical properties of probability fluxes. An additional transition, hereafter called a gate, powered by an external constant force, breaks detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emerging under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to and far from the equilibrium state. Other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, are studied by means of computer simulations and discussed in terms of the rigorous theoretical predictions.

  2. Probability of lensing magnification by cosmologically distributed galaxies

    NASA Technical Reports Server (NTRS)

    Pei, Yichuan C.

    1993-01-01

    We present analytical formulae for computing the magnification probability caused by cosmologically distributed galaxies. The galaxies are assumed to be singular, truncated isothermal spheres, with neither evolution nor clustering in redshift. We find that, for a fixed total mass, extended galaxies produce a broader magnification probability distribution and hence are less efficient as gravitational lenses than compact galaxies. The high-magnification tail caused by large galaxies is well approximated by an A^(-3) form, while the tail caused by small galaxies is slightly shallower. The mean magnification as a function of redshift is, however, found to be independent of the size of the lensing galaxies. In terms of flux conservation, our formulae for the isothermal galaxy model agree to within a few percent with the Dyer-Roeder model of a clumpy universe.

  3. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.

  4. Continental-scale simulation of burn probabilities, flame lengths, and fire size distribution for the United States

    Treesearch

    Mark A. Finney; Charles W. McHugh; Isaac Grenfell; Karin L. Riley

    2010-01-01

    Components of a quantitative risk assessment were produced by simulation of burn probabilities and fire behavior variation for 134 fire planning units (FPUs) across the continental U.S. The system uses fire growth simulation of ignitions modeled from relationships between large fire occurrence and the fire danger index Energy Release Component (ERC). Simulations of 10,...

  5. Précis of statistical significance: rationale, validity, and utility.

    PubMed

    Chow, S L

    1998-04-01

    The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.

  6. Measuring droplet size distributions from overlapping interferometric particle images.

    PubMed

    Bocanegra Evans, Humberto; Dam, Nico; van der Voort, Dennis; Bertens, Guus; van de Water, Willem

    2015-02-01

    Interferometric particle imaging provides a simple way to measure the probability density function (PDF) of droplet sizes from out-focus images. The optical setup is straightforward, but the interpretation of the data is a problem when particle images overlap. We propose a new way to analyze the images. The emphasis is not on a precise identification of droplets, but on obtaining a good estimate of the PDF of droplet sizes in the case of overlapping particle images. The algorithm is tested using synthetic and experimental data. We next use these methods to measure the PDF of droplet sizes produced by spinning disk aerosol generators. The mean primary droplet diameter agrees with predictions from the literature, but we find a broad distribution of satellite droplet sizes.

  7. Toward a comprehensive theory for the sweeping of trapped radiation by inert orbiting matter

    NASA Technical Reports Server (NTRS)

    Fillius, Walker

    1988-01-01

    There is a need to calculate loss rates when trapped Van Allen radiation encounters inert orbiting material such as planetary rings and satellites. An analytic expression for the probability of a hit in a bounce encounter is available for all cases where the absorber is spherical and the particles are gyrotropically distributed on a cylindrical flux tube. The hit probability is a function of the particle's pitch angle, the size of the absorber, and the distance between the flux tube and the absorber when distances are scaled to the gyroradius of a particle moving perpendicular to the magnetic field. Using this expression, hit probabilities in drift encounters were computed for all regimes of particle energies and absorber sizes.

  8. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
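
    The detection sample sizes quoted above follow from a simple binomial argument: under random dispersion, the probability of catching at least one resistant individual among n sampled individuals is 1 - (1 - p)^n. The sketch below (Python, standard library only; the 95% detection target and the 1%, 10%, and 20% frequencies are taken from the abstract) computes the theoretical minimum n, which need not coincide with the specific plan sizes evaluated in the simulation study.

        import math

        def min_sample_size(freq, detect_prob=0.95):
            """Smallest n with P(at least one resistant individual) >= detect_prob,
            assuming resistant individuals are randomly (binomially) distributed."""
            return math.ceil(math.log(1.0 - detect_prob) / math.log(1.0 - freq))

        for freq in (0.01, 0.10, 0.20):
            print(f"resistance frequency {freq:.0%}: n >= {min_sample_size(freq)}")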

  9. Complete Numerical Solution of the Diffusion Equation of Random Genetic Drift

    PubMed Central

    Zhao, Lei; Yue, Xingye; Waxman, David

    2013-01-01

    A numerical method is presented to solve the diffusion equation for the random genetic drift that occurs at a single unlinked locus with two alleles. The method was designed to conserve probability, and the resulting numerical solution represents a probability distribution whose total probability is unity. We describe solutions of the diffusion equation whose total probability is unity as complete. Thus the numerical method introduced in this work produces complete solutions, and such solutions have the property that whenever fixation and loss can occur, they are automatically included within the solution. This feature demonstrates that the diffusion approximation can describe not only internal allele frequencies, but also the boundary frequencies zero and one. The numerical approach presented here constitutes a single inclusive framework from which to perform calculations for random genetic drift. It has a straightforward implementation, allowing it to be applied to a wide variety of problems, including those with time-dependent parameters, such as changing population sizes. As tests and illustrations of the numerical method, it is used to determine: (i) the probability density and time-dependent probability of fixation for a neutral locus in a population of constant size; (ii) the probability of fixation in the presence of selection; and (iii) the probability of fixation in the presence of selection and demographic change, the latter in the form of a changing population size. PMID:23749318
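
    As a quick cross-check of the quantities discussed above (not the diffusion-equation solver the authors present), a discrete Wright-Fisher Monte Carlo sketch can estimate fixation probabilities for a neutral and a selected allele; the population size, initial frequency, and selection coefficient below are illustrative (Python with NumPy).

        import numpy as np

        rng = np.random.default_rng(7)

        def fixation_probability(N, p0, s=0.0, n_reps=2000, max_gen=100_000):
            """Monte Carlo Wright-Fisher model: fraction of replicates in which
            the focal allele reaches frequency 1 (fixation)."""
            fixed = 0
            for _ in range(n_reps):
                p = p0
                for _ in range(max_gen):
                    # Genic selection shifts the sampling probability; s = 0 is neutral.
                    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
                    p = rng.binomial(2 * N, p_sel) / (2 * N)
                    if p == 0.0 or p == 1.0:
                        break
                fixed += (p == 1.0)
            return fixed / n_reps

        N, p0 = 100, 0.1
        print("neutral :", fixation_probability(N, p0))          # theory: p0 = 0.1
        print("selected:", fixation_probability(N, p0, s=0.02))  # should exceed p0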

  10. Size and modal analyses of fines and ultrafines from some Apollo 17 samples

    NASA Technical Reports Server (NTRS)

    Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.

    1975-01-01

    Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.

  11. Shallow slip amplification and enhanced tsunami hazard unravelled by dynamic simulations of mega-thrust earthquakes

    PubMed Central

    Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.

    2016-01-01

    The 2011 Tohoku earthquake produced an unexpected large amount of shallow slip greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies both with depth, earthquake size and whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically-derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically-derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733

  12. Safe leads and lead changes in competitive team sports.

    PubMed

    Clauset, A; Kogan, M; Redner, S

    2015-06-01

    We investigate the time evolution of lead changes within individual games of competitive team sports. Exploiting ideas from the theory of random walks, we show that the number of lead changes within a single game follows a Gaussian distribution. We show that the time of the last lead change and the time of the largest lead are governed by the same arcsine law, a bimodal distribution that diverges at the start and at the end of the game. We also determine the probability that a given lead is "safe" as a function of its size L and game time t. Our predictions generally agree with comprehensive data on more than 1.25 million scoring events in roughly 40,000 games across four professional or semiprofessional team sports, and are more accurate than popular heuristics currently used in sports analytics.

  13. Safe leads and lead changes in competitive team sports

    NASA Astrophysics Data System (ADS)

    Clauset, A.; Kogan, M.; Redner, S.

    2015-06-01

    We investigate the time evolution of lead changes within individual games of competitive team sports. Exploiting ideas from the theory of random walks, we show that the number of lead changes within a single game follows a Gaussian distribution. We show that the time of the last lead change and the time of the largest lead are governed by the same arcsine law, a bimodal distribution that diverges at the start and at the end of the game. We also determine the probability that a given lead is "safe" as a function of its size L and game time t. Our predictions generally agree with comprehensive data on more than 1.25 million scoring events in roughly 40,000 games across four professional or semiprofessional team sports, and are more accurate than popular heuristics currently used in sports analytics.
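
    A minimal illustration of the arcsine behaviour described above, using a symmetric random walk as a toy model of the score difference rather than the authors' sports data; game length and the number of simulated games are arbitrary (Python with NumPy).

        import numpy as np

        rng = np.random.default_rng(0)
        n_games, n_steps = 5000, 1000

        # Symmetric random walk as a toy model of the score difference.
        walks = np.cumsum(rng.choice([-1, 1], size=(n_games, n_steps)), axis=1)
        sign = np.sign(walks)

        # Fraction of game time at which the lead last changes (last zero crossing).
        last_change = np.zeros(n_games)
        for i in range(n_games):
            changes = np.nonzero(np.diff(sign[i]) != 0)[0]
            if changes.size:
                last_change[i] = (changes[-1] + 1) / n_steps

        # Arcsine CDF: F(t) = (2/pi) * arcsin(sqrt(t)); compare at a few quantiles.
        for t in (0.1, 0.5, 0.9):
            empirical = np.mean(last_change <= t)
            arcsine = 2.0 / np.pi * np.arcsin(np.sqrt(t))
            print(f"t={t:.1f}  empirical {empirical:.3f}  arcsine {arcsine:.3f}")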

  14. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
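
    The Mahalanobis-distance goodness-of-fit idea can be sketched with synthetic data as follows; the 3x3 covariance and sample size are hypothetical, and a standard Kolmogorov-Smirnov test stands in for whatever ECDF GOF statistic the study actually used (Python with NumPy/SciPy).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Hypothetical 3x3 position covariance (km^2) and synthetic "true" errors.
        P = np.diag([0.04, 0.09, 0.01])
        errors = rng.multivariate_normal(np.zeros(3), P, size=500)

        # Squared Mahalanobis distances should follow a chi-square law with 3 dof
        # if the covariance is realistically sized.
        P_inv = np.linalg.inv(P)
        d2 = np.einsum("ij,jk,ik->i", errors, P_inv, errors)

        ks_stat, p_value = stats.kstest(d2, stats.chi2(df=3).cdf)
        print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")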

  15. Flow of foams in two-dimensional disordered porous media

    NASA Astrophysics Data System (ADS)

    Dollet, Benjamin; Geraud, Baudouin; Jones, Sian A.; Meheust, Yves; Cantat, Isabelle; Institut de Physique de Rennes Team; Geosciences Rennes Team

    2015-11-01

    Liquid foams are a yield stress fluid with elastic properties. When a foam flow is confined by solid walls, viscous dissipation arises from the contact zones between soap films and walls, giving very peculiar friction laws. In particular, foams potentially invade narrow pores much more efficiently than Newtonian fluids, which is of great importance for enhanced oil recovery. To quantify this effect, we study experimentally flows of foam in a model two-dimensional porous medium, consisting of an assembly of circular obstacles placed randomly in a Hele-Shaw cell, and use image analysis to quantify foam flow at the local scale. We show that bubbles split as they flow through the porous medium, by a mechanism of film pinching during contact with an obstacle, yielding two daughter bubbles per split bubble. We quantify the evolution of the bubble size distribution as a function of the distance along the porous medium, the splitting probability as a function of bubble size, and the probability distribution function of the daughter bubbles. We propose an evolution equation to model this splitting phenomenon and compare it successfully to the experiments, showing how at long distance, the porous medium itself dictates the size distribution of the foam.

  16. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

    Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.
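
    A minimal sketch of the bootstrapping half of the comparison, applied to a single lognormal marginal rather than a full copula model; the true parameters, sample size, and number of bootstrap replicates are illustrative (Python with NumPy).

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic "drought severity" sample with known lognormal parameters.
        mu_true, sigma_true, n = 1.0, 0.5, 100
        sample = rng.lognormal(mu_true, sigma_true, size=n)

        def lognormal_mle(x):
            logx = np.log(x)
            return logx.mean(), logx.std(ddof=0)  # MLE of (mu, sigma)

        # Nonparametric bootstrap: resample with replacement, refit, take percentiles.
        boot = np.array([lognormal_mle(rng.choice(sample, size=n, replace=True))
                         for _ in range(2000)])
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        print(f"mu    95% CI: ({lo[0]:.3f}, {hi[0]:.3f})")
        print(f"sigma 95% CI: ({lo[1]:.3f}, {hi[1]:.3f})")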

  17. Discrete hierarchy of sizes and performances in the exchange-traded fund universe

    NASA Astrophysics Data System (ADS)

    Vandermarliere, B.; Ryckebusch, J.; Schoors, K.; Cauwels, P.; Sornette, D.

    2017-03-01

    Using detailed statistical analyses of the size distribution of a universe of equity exchange-traded funds (ETFs), we discover a discrete hierarchy of sizes, which imprints a log-periodic structure on the probability distribution of ETF sizes that dominates the details of the asymptotic tail. This allows us to propose a classification of the studied universe of ETFs into seven size layers approximately organized according to a multiplicative ratio of 3.5 in their total market capitalization. Introducing a similarity metric generalizing the Herfindahl index, we find that the largest ETFs exhibit a significantly stronger intra-layer and inter-layer similarity compared with the smaller ETFs. Comparing the performance across the seven discerned ETF size layers, we find an inverse size effect, namely, large ETFs perform significantly better than small ones in both 2014 and 2015.

  18. Size, sounds and sex: interactions between body size and harmonic convergence signals determine mating success in Aedes aegypti.

    PubMed

    Cator, Lauren J; Zanti, Zacharo

    2016-12-01

    Several new mosquito control strategies will involve the release of laboratory-reared males which will be required to compete with wild males for mates. Currently, the determinants of male mating success remain unclear. Convergence between male and female harmonic flight tone frequencies during a mating attempt has been found to increase male mating success in the yellow fever mosquito, Aedes aegypti. Size has also been implicated as a factor in male mating success. Here, we investigated the relationships among body size, harmonic convergence signalling, and mating success. We predicted that harmonic convergence would be an important determinant of mating success and that large individuals would be more likely to converge. We used diet to manipulate male and female body size and then measured acoustic interactions during mating attempts between pairs of different body sizes. Additionally, we used playback experiments to measure the direct effect of size on signalling performance. In live pair interactions, harmonic convergence was found to be a significant predictor of copula formation. However, we also found interactions between harmonic convergence behaviour and body size. The probability that a given male successfully formed a copula was a consequence of his size, the size of the female encountered, and whether or not they converged. While convergence appears to be predictive of mating success regardless of size, the positive effect of convergence was modulated by size combinations. In playbacks, adult body size did not affect the probability of harmonic convergence responses. Both body size and harmonic convergence signalling were found to be determinants of male mating success. Our results suggest that, in addition to measuring the convergence ability of mass-release lines, the size distribution of released males may need to be adjusted to complement the size distribution of females. We also found that diet amount alone cannot be used to increase male mating success or convergence probability. A clearer understanding of convergence behaviours, their relationship to mating success, and factors influencing convergence ability would provide the groundwork for improving the mating performance of laboratory-reared lines.

  19. Comparison of two probability distributions used to model sizes of undiscovered oil and gas accumulations: Does the tail wag the assessment?

    USGS Publications Warehouse

    Attanasi, E.D.; Charpentier, R.R.

    2002-01-01

    Undiscovered oil and gas assessments are commonly reported as aggregate estimates of hydrocarbon volumes. Potential commercial value and discovery costs are, however, determined by accumulation size, so engineers, economists, decision makers, and sometimes policy analysts are most interested in projected discovery sizes. The lognormal and Pareto distributions have been used to model exploration target sizes. This note contrasts the outcomes of applying these alternative distributions to the play level assessments of the U.S. Geological Survey's 1995 National Oil and Gas Assessment. Using the same numbers of undiscovered accumulations and the same minimum, medium, and maximum size estimates, substitution of the shifted truncated lognormal distribution for the shifted truncated Pareto distribution reduced assessed undiscovered oil by 16% and gas by 15%. Nearly all of the volume differences resulted because the lognormal had fewer larger fields relative to the Pareto. The lognormal also resulted in a smaller number of small fields relative to the Pareto. For the Permian Basin case study presented here, reserve addition costs were 20% higher with the lognormal size assumption. © 2002 International Association for Mathematical Geology.
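
    The qualitative effect of the tail assumption can be illustrated by drawing truncated samples from a Pareto and a lognormal with roughly matched medians and the same size bounds; all numbers below are hypothetical and the code is not the USGS assessment methodology (Python with NumPy).

        import numpy as np

        rng = np.random.default_rng(3)
        n_fields, vmin, vmax = 1000, 1.0, 500.0  # hypothetical accumulation sizes

        def truncated(draw, lo_cut, hi_cut, size):
            """Rejection-sample a truncated distribution."""
            out = []
            while len(out) < size:
                x = draw(size)
                out.extend(x[(x >= lo_cut) & (x <= hi_cut)])
            return np.array(out[:size])

        # Pareto (alpha = 1.2) and a lognormal with a comparable median (~1.8).
        pareto = truncated(lambda s: vmin * (1 + rng.pareto(1.2, s)), vmin, vmax, n_fields)
        lognorm = truncated(lambda s: rng.lognormal(np.log(1.8), 0.9, s), vmin, vmax, n_fields)

        print(f"Pareto    total {pareto.sum():10.0f}, largest {pareto.max():6.0f}")
        print(f"Lognormal total {lognorm.sum():10.0f}, largest {lognorm.max():6.0f}")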

  20. Transmuted of Rayleigh Distribution with Estimation and Application on Noise Signal

    NASA Astrophysics Data System (ADS)

    Ahmed, Suhad; Qasim, Zainab

    2018-05-01

    This paper deals with transforming the one-parameter Rayleigh distribution into a transmuted probability distribution by introducing a new parameter (λ), since the resulting distribution is useful for representing signal-data distributions and failure-data models. The transmuted parameter, which satisfies |λ| ≤ 1, is estimated together with the original parameter (θ) by the methods of moments and maximum likelihood for different sample sizes (n = 25, 50, 75, 100), and the estimates are compared using the mean square error (MSE).
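
    A sketch of the construction, assuming the quadratic rank transmutation map G(x) = (1 + λ)F(x) - λF(x)^2 that is commonly used to define transmuted distributions, with F the Rayleigh CDF; the θ and λ values below are illustrative (Python with NumPy).

        import numpy as np

        def rayleigh_cdf(x, theta):
            return 1.0 - np.exp(-x**2 / (2.0 * theta**2))

        def transmuted_rayleigh_cdf(x, theta, lam):
            """Quadratic rank transmutation map: G = (1 + lam)*F - lam*F**2, |lam| <= 1."""
            F = rayleigh_cdf(x, theta)
            return (1.0 + lam) * F - lam * F**2

        def transmuted_rayleigh_pdf(x, theta, lam):
            F = rayleigh_cdf(x, theta)
            f = (x / theta**2) * np.exp(-x**2 / (2.0 * theta**2))
            return f * (1.0 + lam - 2.0 * lam * F)

        def sample(theta, lam, size, rng):
            """Inverse-transform sampling: invert the quadratic map, then the Rayleigh CDF."""
            u = rng.uniform(size=size)
            F = u if lam == 0 else ((1.0 + lam) - np.sqrt((1.0 + lam)**2 - 4.0 * lam * u)) / (2.0 * lam)
            return theta * np.sqrt(-2.0 * np.log(1.0 - F))

        rng = np.random.default_rng(4)
        x = sample(theta=2.0, lam=0.5, size=50_000, rng=rng)
        for q in (1.0, 2.0, 4.0):  # empirical CDF vs analytic CDF as a sanity check
            print(f"x={q}: empirical {np.mean(x <= q):.3f}  analytic {transmuted_rayleigh_cdf(q, 2.0, 0.5):.3f}")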

  1. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
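
    A schematic of the blinded re-estimation idea, not the paper's exact algorithm: pool the internal-pilot data, ignore group labels when estimating the variance, and plug the result into a standard two-sample sample-size formula. The pilot size, effect size, and error rates below are illustrative (Python with NumPy/SciPy).

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(9)
        alpha, power, delta = 0.025, 0.90, 0.5  # one-sided level, target power, assumed effect

        # Internal pilot: pooled data from both arms, analysed blinded (labels ignored).
        pilot = np.concatenate([rng.normal(0.0, 1.2, 30), rng.normal(delta, 1.2, 30)])
        sigma_blinded = pilot.std(ddof=1)       # simple one-sample variance estimator

        # Re-estimated per-group sample size from the two-sample z-approximation.
        z = norm.ppf(1 - alpha) + norm.ppf(power)
        n_per_group = int(np.ceil(2 * (z * sigma_blinded / delta) ** 2))
        print(f"blinded SD estimate {sigma_blinded:.2f} -> n per group {n_per_group}")

    Because the group labels are ignored, the blinded estimate absorbs part of the treatment effect, which is one reason the final test statistic no longer has the textbook t distribution.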

  2. Influence of pore structure on compressive strength of cement mortar.

    PubMed

    Zhao, Haitao; Xiao, Qi; Huang, Donghui; Zhang, Shiping

    2014-01-01

    This paper describes an experimental investigation into the pore structure of cement mortar using a mercury porosimeter. Ordinary Portland cement, manufactured sand, and natural sand were used. The porosity of the manufactured sand mortar is higher than that of natural sand mortar at the same mix proportion; in contrast, the probable pore size and threshold radius of the manufactured sand mortar are finer. Moreover, the probable pore size and threshold radius increased with increasing water-to-cement ratio and sand-to-cement ratio. In addition, the existing models of pore size distribution of cement-based materials have been reviewed and compared with the test results in this paper. Finally, the extended Bhattacharjee model was built to examine the relationship between compressive strength and pore structure.

  3. Influence of Pore Structure on Compressive Strength of Cement Mortar

    PubMed Central

    Zhao, Haitao; Xiao, Qi; Huang, Donghui

    2014-01-01

    This paper describes an experimental investigation into the pore structure of cement mortar using a mercury porosimeter. Ordinary Portland cement, manufactured sand, and natural sand were used. The porosity of the manufactured sand mortar is higher than that of natural sand mortar at the same mix proportion; in contrast, the probable pore size and threshold radius of the manufactured sand mortar are finer. Moreover, the probable pore size and threshold radius increased with increasing water-to-cement ratio and sand-to-cement ratio. In addition, the existing models of pore size distribution of cement-based materials have been reviewed and compared with the test results in this paper. Finally, the extended Bhattacharjee model was built to examine the relationship between compressive strength and pore structure. PMID:24757414

  4. The minimum area requirements (MAR) for giant panda: an empirical study

    PubMed Central

    Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang

    2016-01-01

    Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population’s long-term persistence. In this study, the response of occupancy probability of giant pandas against habitat patch size was studied in five of the six mountain ranges inhabited by giant panda, which cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for giant panda was estimated to be 114.7 km2 based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are large and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in mountain ranges where fragmentation is most intensive. PMID:27929520

  5. The minimum area requirements (MAR) for giant panda: an empirical study.

    PubMed

    Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang

    2016-12-08

    Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population's long-term persistence. In this study, the response of occupancy probability of giant pandas against habitat patch size was studied in five of the six mountain ranges inhabited by giant panda, which cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for giant panda was estimated to be 114.7 km² based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are large and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in mountain ranges where fragmentation is most intensive.

  6. Undersampling power-law size distributions: effect on the assessment of extreme natural hazards

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2014-01-01

    The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
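
    The undersampling effect and the maximum-likelihood fit of the power-law exponent can be sketched for a pure (untapered) Pareto parent; the catalog lengths and true exponent below are arbitrary (Python with NumPy).

        import numpy as np

        rng = np.random.default_rng(5)
        alpha_true, xmin = 1.0, 1.0

        def pareto_mle(x, xmin):
            """Maximum-likelihood (Hill) estimate of the Pareto exponent."""
            return len(x) / np.sum(np.log(x / xmin))

        # Short synthetic "catalogs" drawn from a pure Pareto parent distribution.
        for n in (20, 100, 1000):
            estimates, maxima = [], []
            for _ in range(500):
                x = xmin * (1 + rng.pareto(alpha_true, n))
                estimates.append(pareto_mle(x, xmin))
                maxima.append(x.max())
            print(f"n={n:4d}: alpha_hat mean {np.mean(estimates):.2f}, "
                  f"median catalog maximum {np.median(maxima):8.1f}")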

  7. Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2015-12-01

    The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.

  8. Unified nano-mechanics based probabilistic theory of quasibrittle and brittle structures: II. Fatigue crack growth, lifetime and scaling

    NASA Astrophysics Data System (ADS)

    Le, Jia-Liang; Bažant, Zdeněk P.

    2011-07-01

    This paper extends the theoretical framework presented in the preceding Part I to the lifetime distribution of quasibrittle structures failing at the fracture of one representative volume element under constant amplitude fatigue. The probability distribution of the critical stress amplitude is derived for a given number of cycles and a given minimum-to-maximum stress ratio. The physical mechanism underlying the Paris law for fatigue crack growth is explained under certain plausible assumptions about the damage accumulation in the cyclic fracture process zone at the tip of subcritical crack. This law is then used to relate the probability distribution of critical stress amplitude to the probability distribution of fatigue lifetime. The theory naturally yields a power-law relation for the stress-life curve (S-N curve), which agrees with Basquin's law. Furthermore, the theory indicates that, for quasibrittle structures, the S-N curve must be size dependent. Finally, physical explanation is provided to the experimentally observed systematic deviations of lifetime histograms of various ceramics and bones from the Weibull distribution, and their close fits by the present theory are demonstrated.

  9. Voronoi cell patterns: Theoretical model and applications

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Einstein, T. L.

    2011-11-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
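
    For the 1D case, the cell-size statistics are easy to reproduce directly from random points, without the fragmentation model itself; the sketch below builds periodic 1D Voronoi cells from uniformly placed points and reports the normalised size distribution (Python with NumPy).

        import numpy as np

        rng = np.random.default_rng(8)

        # Homogeneous points on a ring (periodic 1D system); each Voronoi cell
        # extends to the midpoints with the left and right neighbours.
        n_points, length = 10_000, 10_000.0
        pts = np.sort(rng.uniform(0.0, length, n_points))
        gaps = np.diff(np.concatenate([pts, [pts[0] + length]]))  # neighbour spacings
        cells = 0.5 * (gaps + np.roll(gaps, 1))                   # Voronoi cell lengths

        s = cells / cells.mean()  # sizes normalised by the mean
        print(f"mean {s.mean():.3f}, variance {s.var():.3f}")
        print("P(s < 0.5) =", round(np.mean(s < 0.5), 3), " P(s > 2) =", round(np.mean(s > 2.0), 3))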

  10. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  11. Invasion resistance arises in strongly interacting species-rich model competition communities.

    PubMed Central

    Case, T J

    1990-01-01

    I assemble stable multispecies Lotka-Volterra competition communities that differ in resident species number and average strength (and variance) of species interactions. These are then invaded with randomly constructed invaders drawn from the same distribution as the residents. The invasion success rate and the fate of the residents are determined as a function of community- and species-level properties. I show that the probability of colonization success for an invader decreases with community size and the average strength of competition (alpha). Communities composed of many strongly interacting species limit the invasion possibilities of most similar species. These communities, even for a superior invading competitor, set up a sort of "activation barrier" that repels invaders when they invade at low numbers. This "priority effect" for residents is not assumed a priori in my description for the individual population dynamics of these species; rather it emerges because species-rich and strongly interacting species sets have alternative stable states that tend to disfavor species at low densities. These models point to community-level rather than invader-level properties as the strongest determinant of differences in invasion success. The probability of extinction for a resident species increases with community size, and the probability of successful colonization by the invader decreases. Thus an equilibrium community size results wherein the probability of a resident species' extinction just balances the probability of an invader's addition. Given the distribution of alpha it is now possible to predict the equilibrium species number. The results provide a logical framework for an island-biogeographic theory in which species turnover is low even in the face of persistent invasions and for the protection of fragile native species from invading exotics. PMID:11607132

  12. Estimating alarm thresholds and the number of components in mixture distributions

    NASA Astrophysics Data System (ADS)

    Burr, Tom; Hamada, Michael S.

    2012-09-01

    Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and in solution monitoring (SM) for nuclear safeguards. SM data is increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities having plutonium in solution form in many tanks. This paper provides background for mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals defined as tank-to-tank transfer differences, and wait-mode residuals defined as changes during non-transfer modes. A previous paper investigated impacts on transfer-mode and wait-mode residuals of event marking errors which arise when the estimated start and/or stop times of tank events such as transfers are somewhat different from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes, and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
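
    A common way to characterize such a mixture and choose the number of components is an information criterion; the sketch below fits Gaussian mixtures of increasing order to synthetic residuals and selects the order by BIC (Python with NumPy and scikit-learn; the two-component residual model is hypothetical, not actual SM data).

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(6)

        # Synthetic transfer-mode residuals: a two-component mixture mimicking
        # correctly marked events plus a population shifted by event-marking errors.
        residuals = np.concatenate([rng.normal(0.0, 1.0, 800),
                                    rng.normal(4.0, 1.5, 200)]).reshape(-1, 1)

        # Choose the number of components by the Bayesian information criterion.
        bics = {}
        for k in range(1, 5):
            gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(residuals)
            bics[k] = gm.bic(residuals)
        print("BIC by number of components:", {k: round(v, 1) for k, v in bics.items()})
        print("selected number of components:", min(bics, key=bics.get))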

  13. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
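
    The exponential decline can be seen in a short power calculation for a one-sided, one-sample z-test; the effect size and significance level below are arbitrary (Python with NumPy/SciPy).

        import numpy as np
        from scipy.stats import norm

        alpha, delta, sigma = 0.05, 0.5, 1.0  # one-sided test, effect of 0.5 sd
        z_crit = norm.ppf(1 - alpha)

        for n in (10, 20, 40, 80, 160):
            # Type II error of a one-sided one-sample z-test of H0: mu = 0 vs mu = delta.
            beta = norm.cdf(z_crit - delta * np.sqrt(n) / sigma)
            print(f"n={n:4d}  Type II error = {beta:.2e}")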

  14. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    USGS Publications Warehouse

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10^-4 for the exponential distribution and 2.3×10^-4 for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10^-4, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6×10^-4. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the probability of a large eruption in the next year at 1.4×10^-5.
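
    Under an exponential (Poissonian) recurrence model, the probability of at least one eruption in a horizon Δt is 1 - exp(-Δt/μ), where μ is the mean recurrence interval. The sketch below uses hypothetical μ values chosen only so that the outputs land near the annual probabilities quoted above; they are not taken from the paper (Python with NumPy).

        import numpy as np

        def eruption_probability(mean_interval_years, horizon_years=1.0):
            """P(at least one event within the horizon) under an exponential recurrence model."""
            return 1.0 - np.exp(-horizon_years / mean_interval_years)

        # Hypothetical mean recurrence intervals (years), for illustration only.
        for label, mu in [("volcanic center", 7000.0), ("regional mafic vents", 1500.0)]:
            print(f"{label:22s} P(eruption in next year) = {eruption_probability(mu):.2e}")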

  15. Cell-size distribution in epithelial tissue formation and homeostasis

    PubMed Central

    Primo, Luca; Celani, Antonio

    2017-01-01

    How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that cell size distribution is self-similar. Our experimental data confirm predictions and reveal that, as assumed in the theory, cell division times scale like a power-law of the cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistently with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size. PMID:28330988

  16. Cell-size distribution in epithelial tissue formation and homeostasis.

    PubMed

    Puliafito, Alberto; Primo, Luca; Celani, Antonio

    2017-03-01

    How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that cell size distribution is self-similar. Our experimental data confirm predictions and reveal that, as assumed in the theory, cell division times scale like a power-law of the cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistently with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size. © 2017 The Author(s).

  17. Forecasting distributions of large federal-lands fires utilizing satellite and gridded weather information

    USGS Publications Warehouse

    Preisler, H.K.; Burgan, R.E.; Eidenshink, J.C.; Klaver, Jacqueline M.; Klaver, R.W.

    2009-01-01

    The current study presents a statistical model for assessing the skill of fire danger indices and for forecasting the distribution of the expected numbers of large fires over a given region for the upcoming week. The procedure permits development of daily maps that forecast, for the forthcoming week and within federal lands, percentiles of the distributions of (i) the number of ignitions; (ii) the number of fires above a given size; and (iii) the conditional probability of fires greater than a specified size, given ignition. As an illustration, we used the methods to study the skill of the Fire Potential Index, an index that incorporates satellite and surface observations to map fire potential at a national scale, in forecasting distributions of large fires. © 2009 IAWF.

  18. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.

  19. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.

  20. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  1. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
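
    The sketch below is a rough Python illustration of the testing idea described above, not a port of the fixedTimeEvents R package: it uses Monte Carlo resampling of completely random successes (rather than the exact distribution derived in the article) to judge whether short distances between successes are over-represented. The sequence length, success probability, clustering mechanism and gap threshold are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      def success_gaps(n_trials, p):
          """Distances between consecutive successes in one Bernoulli sequence."""
          hits = np.flatnonzero(rng.random(n_trials) < p)
          return np.diff(hits)

      def short_gap_fraction(gaps, short):
          return np.mean(gaps <= short) if gaps.size else np.nan

      n, p, short = 1000, 0.02, 5
      seq = rng.random(n) < p
      seq[: n // 10] |= rng.random(n // 10) < p    # impose artificial clustering at the start
      obs = short_gap_fraction(np.diff(np.flatnonzero(seq)), short)

      # Monte Carlo reference distribution under completely random successes.
      null = np.array([short_gap_fraction(success_gaps(n, p), short) for _ in range(5000)])
      p_value = np.nanmean(null >= obs)
      print(f"observed short-gap fraction {obs:.3f}, Monte Carlo p-value {p_value:.3f}")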

  2. K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents

    ERIC Educational Resources Information Center

    Gwanyama, Philip Wagala

    2005-01-01

    The Kolmogorov–Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
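
    A minimal sketch of the idea, under the caveat implicit above: when the exponential scale is estimated from the same data, the standard K-S critical values are too lenient, so a Lilliefors-type correction is needed. The example below approximates that correction with a parametric bootstrap; the data and sample size are invented for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      waits = rng.exponential(scale=2.0, size=40)          # hypothetical waiting times

      scale_hat = waits.mean()                             # scale estimated from the data
      d_obs = stats.kstest(waits, "expon", args=(0, scale_hat)).statistic

      # Parametric bootstrap of the K-S statistic with a re-estimated scale,
      # mimicking the Lilliefors correction for an estimated parameter.
      boot = []
      for _ in range(2000):
          x = rng.exponential(scale=scale_hat, size=waits.size)
          boot.append(stats.kstest(x, "expon", args=(0, x.mean())).statistic)
      p_value = np.mean(np.array(boot) >= d_obs)
      print(f"D = {d_obs:.3f}, bootstrap p-value = {p_value:.3f}")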

  3. A Performance Comparison on the Probability Plot Correlation Coefficient Test using Several Plotting Positions for GEV Distribution.

    NASA Astrophysics Data System (ADS)

    Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng

    2014-05-01

    It is very important to select an appropriate probability distribution in statistical hydrology. A goodness-of-fit test is a statistical method that selects an appropriate probability model for a given data set. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than many alternatives. In this study, we focus on the PPCC test for the GEV distribution, which is widely used around the world. For the GEV model, several plotting position formulas have been suggested. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. The regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte Carlo simulation. ACKNOWLEDGEMENTS: This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
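
    A compact sketch of how a PPCC statistic for the GEV distribution can be computed. The Gringorten plotting position used here is a generic stand-in, not one of the shape-dependent formulas evaluated in the study, and the shape parameter and sample size are illustrative assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      shape, n = -0.1, 50                        # hypothetical GEV shape parameter and sample size
      sample = stats.genextreme.rvs(shape, size=n, random_state=rng)

      x = np.sort(sample)
      i = np.arange(1, n + 1)
      pp = (i - 0.44) / (n + 0.12)               # Gringorten plotting position
      q_theory = stats.genextreme.ppf(pp, shape)

      ppcc = np.corrcoef(x, q_theory)[0, 1]
      print(f"PPCC statistic: {ppcc:.4f}")
      # A full test compares this value with critical values obtained by Monte Carlo
      # simulation as a function of the shape parameter and sample size.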

  4. Self-narrowing of size distributions of nanostructures by nucleation antibunching

    NASA Astrophysics Data System (ADS)

    Glas, Frank; Dubrovskii, Vladimir G.

    2017-08-01

    We study theoretically the size distributions of ensembles of nanostructures fed from a nanosize mother phase or a nanocatalyst that contains a limited number of the growth species that form each nanostructure. In such systems, the nucleation probability decreases exponentially after each nucleation event, leading to the so-called nucleation antibunching. Specifically, this effect has been observed in individual nanowires grown in the vapor-liquid-solid mode and greatly affects their properties. By performing numerical simulations over large ensembles of nanostructures as well as developing two different analytical schemes (a discrete and a continuum approach), we show that nucleation antibunching completely suppresses fluctuation-induced broadening of the size distribution. As a result, the variance of the distribution saturates to a time-independent value instead of growing infinitely with time. The size distribution widths and shapes primarily depend on the two parameters describing the degree of antibunching and the nucleation delay required to initiate the growth. The resulting sub-Poissonian distributions are highly desirable for improving size homogeneity of nanowires. On a more general level, this unique self-narrowing effect is expected whenever the growth rate is regulated by a nanophase which is able to nucleate an island much faster than it is refilled from a surrounding macroscopic phase.

  5. Spacing distribution functions for the one-dimensional point-island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2011-07-01

    We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^XY(x,y), which represents the probability density of nucleation at position x within a gap of size y. Our proposed functional form for p_n^XY(x,y) provides an excellent description of the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.

  6. Pasture size effects on the ability of off-stream water or restricted stream access to alter the spatial/temporal distribution of grazing beef cows.

    PubMed

    Bisinger, J J; Russell, J R; Morrical, D G; Isenhart, T M

    2014-08-01

    For 2 grazing seasons, effects of pasture size, stream access, and off-stream water on cow distribution relative to a stream were evaluated in six 12.1-ha cool-season grass pastures. Two pasture sizes (small [4.0 ha] and large [12.1 ha]) with 3 management treatments (unrestricted stream access without off-stream water [U], unrestricted stream access with off-stream water [UW], and stream access restricted to a stabilized stream crossing [R]) were alternated between pasture sizes every 2 wk for 5 consecutive 4-wk intervals in each grazing season. Small and large pastures were stocked with 5 and 15 August-calving cows from mid May through mid October. At 10-min intervals, cow location was determined with Global Positioning System collars fitted on 2 to 3 cows in each pasture and identified when observed in the stream (0-10 m from the stream) or riparian (0-33 m from the stream) zones and ambient temperature was recorded with on-site weather stations. Over all intervals, cows were observed more (P ≤ 0.01) frequently in the stream and riparian zones of small than large pastures regardless of management treatment. Cows in R pastures had 24 and 8% less (P < 0.01) observations in the stream and riparian zones than U or UW pastures regardless of pasture size. Off-stream water had little effect on the presence of cows in or near pasture streams regardless of pasture size. In 2011, the probability of cow presence in the stream and riparian zones increased at greater (P < 0.04) rates as ambient temperature increased in U and UW pastures than in 2010. As ambient temperature increased, the probability of cow presence in the stream and riparian zones increased at greater (P < 0.01) rates in small than large pastures. Across pasture sizes, the probability of cow presence in the stream and riparian zone increased less (P < 0.01) with increasing ambient temperatures in R than U and UW pastures. Rates of increase in the probability of cow presence in shade (within 10 m of tree drip lines) in the total pasture with increasing temperatures did not differ between treatments. However, probability of cow presence in riparian shade increased at greater (P < 0.01) rates in small than large pastures. Pasture size was a major factor affecting congregation of cows in or near pasture streams with unrestricted access.

  7. Shape of growth-rate distribution determines the type of Non-Gibrat’s Property

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki

    2011-11-01

    In this study, the authors examine exhaustive business data on Japanese firms, which cover nearly all companies in the mid- and large-scale ranges in terms of firm size, to reach several key findings on profits/sales distribution and business growth trends. Here, profits denote net profits. First, detailed balance is observed not only in profits data but also in sales data. Furthermore, the growth-rate distribution of sales has wider tails than the linear growth-rate distribution of profits in log-log scale. On the one hand, in the mid-scale range of profits, the probability of positive growth decreases and the probability of negative growth increases symmetrically as the initial value increases. This is called Non-Gibrat’s First Property. On the other hand, in the mid-scale range of sales, the probability of positive growth decreases as the initial value increases, while the probability of negative growth hardly changes. This is called Non-Gibrat’s Second Property. Under detailed balance, Non-Gibrat’s First and Second Properties are analytically derived from the linear and quadratic growth-rate distributions in log-log scale, respectively. In both cases, the log-normal distribution is inferred from Non-Gibrat’s Properties and detailed balance. These analytic results are verified by empirical data. Consequently, this clarifies the notion that the difference in shapes between growth-rate distributions of sales and profits is closely related to the difference between the two Non-Gibrat’s Properties in the mid-scale range.

  8. Average fidelity between random quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyczkowski, Karol; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Aleja Lotnikow 32/44, 02-668 Warsaw; Perimeter Institute, Waterloo, Ontario, N2L 2Y5

    2005-03-01

    We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.
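
    The following sketch (our own, using a standard Ginibre construction rather than the paper's derivations) estimates the mean fidelity between two independent density matrices drawn from the Hilbert-Schmidt measure; the dimension and number of samples are illustrative.

      import numpy as np
      from scipy.linalg import sqrtm

      rng = np.random.default_rng(3)

      def hs_random_state(n):
          """Density matrix drawn from the Hilbert-Schmidt measure (Ginibre construction)."""
          g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
          rho = g @ g.conj().T
          return rho / np.trace(rho).real

      def fidelity(rho, sigma):
          """F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))**2."""
          s = sqrtm(rho)
          return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

      N, samples = 4, 2000
      f_mean = np.mean([fidelity(hs_random_state(N), hs_random_state(N)) for _ in range(samples)])
      print(f"estimated mean fidelity between Hilbert-Schmidt random states, N = {N}: {f_mean:.3f}")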

  9. The application of a linear algebra to the analysis of mutation rates.

    PubMed

    Jones, M E; Thomas, S M; Clarke, K

    1999-07-07

    Cells and bacteria growing in culture are subject to mutation, and as this mutation is the ultimate substrate for selection and evolution, the factors controlling the mutation rate are of some interest. The mutational event is not observed directly, but is inferred from the phenotype of the original mutant or of its descendants; the rate of mutation is inferred from the number of such mutant phenotypes. Such inference presumes a knowledge of the probability distribution for the size of a clone arising from a single mutation. We develop a mathematical formulation that assists in the design and analysis of experiments which investigate mutation rates and mutant clone size distribution, and we use it to analyse data for which the classical Luria-Delbrück clone-size distribution must be rejected. Copyright 1999 Academic Press.
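
    To make the inference problem concrete, the sketch below simulates mutant clone sizes in a simplified Luria-Delbrück-style fluctuation experiment with deterministic doubling growth, and recovers the mutation rate with the classical P0 (zero-mutant cultures) method. This is a generic textbook construction, not the linear-algebra formulation developed in the paper; all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)

      def simulate_culture(G, mu):
          """Final mutant count in a culture grown from one cell by G doublings."""
          mutants = 0
          for g in range(G):
              divisions = 2 ** g                        # cells dividing at generation g
              new_clones = rng.poisson(mu * divisions)  # mutations arising at this generation
              mutants += new_clones * 2 ** (G - g - 1)  # each new clone doubles until the end
          return mutants

      G, mu, cultures = 20, 2e-7, 500
      n_final = 2 ** G
      counts = np.array([simulate_culture(G, mu) for _ in range(cultures)])

      p0 = np.mean(counts == 0)                         # fraction of cultures with no mutants
      m_hat = -np.log(p0)                               # estimated mutations per culture
      print(f"true mutation rate {mu:.1e}, P0 estimate {m_hat / n_final:.1e}")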

  10. A population model for a long-lived, resprouting chaparral shrub: Adenostoma fasciculatum

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Rundel, Philip W.

    1986-01-01

    Extensive stands of Adenostoma fasciculatum H.&A. (chamise) in the chaparral of California are periodically rejuvenated by fire. A population model based on size-specific demographic characteristics (thinning and fire-caused mortality) was developed to generate probable age distributions within size classes and survivorship curves for typical stands. The model was modified to assess the long term effects of different mortality rates on age distributions. Under observed mean mortality rates (28.7%), model output suggests some shrubs can survive more than 23 fires. A 10% increase in mortality rate by size class slightly shortened the survivorship curve, while a 10% decrease in mortality rate by size class greatly elongated the curve. This approach may be applicable to other long-lived plant species with complex life histories.

  11. Species survival and scaling laws in hostile and disordered environments

    NASA Astrophysics Data System (ADS)

    Rocha, Rodrigo P.; Figueiredo, Wagner; Suweis, Samir; Maritan, Amos

    2016-10-01

    In this work we study the likelihood of survival of a single species in the context of hostile and disordered environments. Population dynamics in this environment, as modeled by the Fisher equation, is characterized by negative average growth rate, except in some random spatially distributed patches that may support life. In particular, we are interested in the phase diagram of the survival probability and in the critical size problem, i.e., the minimum patch size required for surviving in the long-time dynamics. We propose a measure for the critical patch size as being proportional to the participation ratio of the eigenvector corresponding to the largest eigenvalue of the linearized Fisher dynamics. We obtain the (extinction-survival) phase diagram and the probability distribution function (PDF) of the critical patch sizes for two topologies, namely, the one-dimensional system and the fractal Peano basin. We show that both topologies share the same qualitative features, but the fractal topology requires higher spatial fluctuations to guarantee species survival. We perform a finite-size scaling and we obtain the associated scaling exponents. In addition, we show that the PDF of the critical patch sizes has a universal shape for the 1D case in terms of the model parameters (diffusion, growth rate, etc.). In contrast, the diffusion coefficient has a drastic effect on the PDF of the critical patch sizes of the fractal Peano basin, and it does not obey the same scaling law as the 1D case.
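
    A minimal numerical sketch of the criterion described above, using our own one-dimensional discretization rather than the authors' code: the linearized Fisher operator is built from a discrete Laplacian plus a spatially random growth rate, and the participation ratio of the eigenvector of its largest eigenvalue is used as a patch-size measure. Lattice size, diffusion constant and growth rates are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)
      L, dx, D = 200, 1.0, 1.0
      a = np.full(L, -0.5)                         # hostile background growth rate
      for p in rng.choice(L - 5, size=5, replace=False):
          a[p:p + 5] = 1.0                         # favourable patches of ~5 sites

      # Linearized Fisher operator: D * discrete Laplacian + diagonal growth rates.
      lap = (np.diag(-2.0 * np.ones(L)) + np.diag(np.ones(L - 1), 1)
             + np.diag(np.ones(L - 1), -1)) / dx ** 2
      M = D * lap + np.diag(a)

      w, v = np.linalg.eigh(M)
      lead = v[:, np.argmax(w)]                    # eigenvector of the largest eigenvalue
      pr = (lead ** 2).sum() ** 2 / (lead ** 4).sum()
      print(f"largest eigenvalue {w.max():.3f} (survival if positive), participation ratio {pr:.1f}")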

  12. Abundance and size distribution dynamics of abyssal epibenthic megafauna in the northeast Pacific.

    PubMed

    Ruhl, Henry A

    2007-05-01

    The importance of interannual variation in deep-sea abundances is now becoming recognized. There is, however, relatively little known about what processes dominate the observed fluctuations. The abundance and size distribution of the megabenthos have been examined here using a towed camera system at a deep-sea station in the northeast Pacific (Station M) from 1989 to 2004. This 16-year study included 52 roughly seasonal transects averaging 1.2 km in length with over 35600 photographic frames analyzed. Mobile epibenthic megafauna at 4100 m depth have exhibited interannual scale changes in abundance from one to three orders of magnitude. Increases in abundance have now been significantly linked to decreases in mean body size, suggesting that accruals in abundance probably result from the recruitment of young individuals. Examinations of size-frequency histograms indicate several possible recruitment events. Shifts in size-frequency distributions were also used to make basic estimations of individual growth rates from 1 to 6 mm/month, depending on the taxon. Regional intensification in reproduction followed by recruitment within the study area could explain the majority of observed accruals in abundance. Although some adult migration is certainly probable in accounting for local variation in abundances, the slow movements of benthic life stages restrict regional migrations for most taxa. Negative competitive interactions and survivorship may explain the precipitous declines of some taxa. This and other studies have shown that abundances from protozoans to large benthic invertebrates and fishes all have undergone significant fluctuations in abundance at Station M over periods of weeks to years.

  13. Learning Probabilities From Random Observables in High Dimensions: The Maximum Entropy Distribution and Others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi

    2015-11-01

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ . The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ =0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞ ). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α =(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤ 10).

  14. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    NASA Astrophysics Data System (ADS)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly to minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under massive regions.

  15. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    PubMed

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differs widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.

  16. Spatial patch occupancy patterns of the Lower Keys marsh rabbit

    USGS Publications Warehouse

    Eaton, Mitchell J.; Hughes, Phillip T.; Nichols, James D.; Morkill, Anne; Anderson, Chad

    2011-01-01

    Reliable estimates of presence or absence of a species can provide substantial information on management questions related to distribution and habitat use but should incorporate the probability of detection to reduce bias. We surveyed for the endangered Lower Keys marsh rabbit (Sylvilagus palustris hefneri) in habitat patches on 5 Florida Key islands, USA, to estimate occupancy and detection probabilities. We derived detection probabilities using spatial replication of plots and evaluated hypotheses that patch location (coastal or interior) and patch size influence occupancy and detection. Results demonstrate that detection probability, given rabbits were present, was <0.5 and suggest that naïve estimates (i.e., estimates without consideration of imperfect detection) of patch occupancy are negatively biased. We found that patch size and location influenced probability of occupancy but not detection. Our findings will be used by Refuge managers to evaluate population trends of Lower Keys marsh rabbits from historical data and to guide management decisions for species recovery. The sampling and analytical methods we used may be useful for researchers and managers of other endangered lagomorphs and cryptic or fossorial animals occupying diverse habitats.
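
    The sketch below illustrates the general kind of occupancy estimation referred to above with a generic single-season occupancy likelihood (spatially replicated plots within patches, constant occupancy psi and detection probability p); it is not the authors' model, and all simulated quantities are hypothetical.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(6)
      n_patches, n_plots, psi_true, p_true = 120, 4, 0.6, 0.4
      z = rng.random(n_patches) < psi_true                          # true (latent) occupancy
      y = (rng.random((n_patches, n_plots)) < p_true) & z[:, None]  # detections on replicate plots

      def neg_log_lik(theta):
          psi, p = expit(theta)                     # keep both probabilities in (0, 1)
          det = y.sum(axis=1)
          lik = np.where(det > 0,
                         psi * p ** det * (1 - p) ** (n_plots - det),
                         psi * (1 - p) ** n_plots + (1 - psi))
          return -np.log(lik).sum()

      fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
      psi_hat, p_hat = expit(fit.x)
      naive = (y.sum(axis=1) > 0).mean()
      print(f"naive occupancy {naive:.2f}, estimated psi {psi_hat:.2f}, detection p {p_hat:.2f}")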

  17. Velocity distributions among colliding asteroids

    NASA Technical Reports Server (NTRS)

    Bottke, William F., Jr.; Nolan, Michael C.; Greenberg, Richard; Kolvoord, Robert A.

    1994-01-01

    The probability distribution for impact velocities between two given asteroids is wide, non-Gaussian, and often contains spikes according to our new method of analysis in which each possible orbital geometry for collision is weighted according to its probability. An average value would give a good representation only if the distribution were smooth and narrow. Therefore, the complete velocity distribution we obtain for various asteroid populations differs significantly from published histograms of average velocities. For all pairs among the 682 asteroids in the main-belt with D greater than 50 km, we find that our computed velocity distribution is much wider than previously computed histograms of average velocities. In this case, the most probable impact velocity is approximately 4.4 km/sec, compared with the mean impact velocity of 5.3 km/sec. For cases of a single asteroid (e.g., Gaspra or Ida) relative to an impacting population, the distribution we find yields lower velocities than previously reported by others. The width of these velocity distributions implies that mean impact velocities must be used with caution when calculating asteroid collisional lifetimes or crater-size distributions. Since the most probable impact velocities are lower than the mean, disruption events may occur less frequently than previously estimated. However, this disruption rate may be balanced somewhat by an apparent increase in the frequency of high-velocity impacts between asteroids. These results have implications for issues such as asteroidal disruption rates, the amount/type of impact ejecta available for meteoritical delivery to the Earth, and the geology and evolution of specific asteroids like Gaspra.

  18. Bankruptcy risk model and empirical tests

    PubMed Central

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M.; Urošević, Branko; Stanley, H. Eugene

    2010-01-01

    We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor—the debt-to-asset ratio R—in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes’s theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees—although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers. PMID:20937903

  19. Stochastic models for the Trojan Y-Chromosome eradication strategy of an invasive species.

    PubMed

    Wang, Xueying; Walton, Jay R; Parshad, Rana D

    2016-01-01

    The Trojan Y-Chromosome (TYC) strategy, an autocidal genetic biocontrol method, has been proposed to eliminate invasive alien species. In this work, we develop a Markov jump process model for this strategy, and we verify that there is a positive probability for wild-type females going extinct within a finite time. Moreover, when sex-reversed Trojan females are introduced at a constant population size, we formulate a stochastic differential equation (SDE) model as an approximation to the proposed Markov jump process model. Using the SDE model, we investigate the probability distribution and expectation of the extinction time of wild-type females by solving Kolmogorov equations associated with these statistics. The results indicate how the probability distribution and expectation of the extinction time are shaped by the initial conditions and the model parameters.

  20. Simulation of precipitation by weather pattern and frontal analysis

    NASA Astrophysics Data System (ADS)

    Wilby, Robert

    1995-12-01

    Daily rainfall from two sites in central and southern England was stratified according to the presence or absence of weather fronts and then cross-tabulated with the prevailing Lamb Weather Type (LWT). A semi-Markov chain model was developed for simulating daily sequences of LWTs from matrices of transition probabilities between weather types for the British Isles 1970-1990. Daily and annual rainfall distributions were then simulated from the prevailing LWTs using historic conditional probabilities for precipitation occurrence and frontal frequencies. When compared with a conventional rainfall generator the frontal model produced improved estimates of the overall size distribution of daily rainfall amounts and in particular the incidence of low-frequency high-magnitude totals. Further research is required to establish the contribution of individual frontal sub-classes to daily rainfall totals and of long-term fluctuations in frontal frequencies to conditional probabilities.
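
    As a toy illustration of this simulation scheme, the sketch below uses a plain first-order Markov chain over three hypothetical weather types (the study uses a semi-Markov model over the Lamb Weather Types) and draws rainfall occurrence and amounts conditionally on the prevailing type; all probabilities and rainfall parameters are invented.

      import numpy as np

      rng = np.random.default_rng(7)
      types = ["anticyclonic", "cyclonic", "westerly"]     # hypothetical weather types
      P = np.array([[0.7, 0.2, 0.1],                       # daily transition probabilities
                    [0.3, 0.5, 0.2],
                    [0.2, 0.3, 0.5]])
      p_wet = np.array([0.1, 0.7, 0.5])                    # P(rain | type)
      mean_mm = np.array([1.0, 6.0, 3.0])                  # mean wet-day rainfall by type (mm)

      n_days, state = 365, 0
      rain = np.zeros(n_days)
      visits = np.zeros(3, dtype=int)
      for day in range(n_days):
          state = rng.choice(3, p=P[state])
          visits[state] += 1
          if rng.random() < p_wet[state]:
              rain[day] = rng.exponential(mean_mm[state])

      for t, v in zip(types, visits):
          print(f"{t:>12s}: {v} days")
      print(f"annual total {rain.sum():.0f} mm over {np.count_nonzero(rain)} wet days")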

  1. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-03-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64-1.46) for nominally statistically significant results and D = 0.24 (0.11-0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
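
    The power figures quoted above correspond to standard calculations of the kind sketched below: the power of a two-sided, two-sample t-test for Cohen's d of 0.2, 0.5 and 0.8, evaluated here at an illustrative group size (not a figure taken from the paper).

      import numpy as np
      from scipy import stats

      def power_two_sample_t(d, n_per_group, alpha=0.05):
          """Power of a two-sided two-sample t-test for standardized effect size d."""
          df = 2 * n_per_group - 2
          nc = d * np.sqrt(n_per_group / 2.0)                  # noncentrality parameter
          t_crit = stats.t.ppf(1 - alpha / 2, df)
          return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

      for d, label in [(0.2, "small"), (0.5, "medium"), (0.8, "large")]:
          print(f"{label:>6s} effect (d = {d}): power = {power_two_sample_t(d, 20):.2f}")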

  2. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

    A statistical approach to the analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation, with application of linear programming methods, to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.

  3. Mechanics-based statistics of failure risk of quasibrittle structures and size effect on safety factors.

    PubMed

    Bazant, Zdenĕk P; Pang, Sze-Dai

    2006-06-20

    In mechanical design as well as protection from various natural hazards, one must ensure an extremely low failure probability such as 10^-6. How to achieve that goal is adequately understood only for the limiting cases of brittle or ductile structures. Here we present a theory to do that for the transitional class of quasibrittle structures, having brittle constituents and characterized by nonnegligible size of material inhomogeneities. We show that the probability distribution of strength of the representative volume element of material is governed by the Maxwell-Boltzmann distribution of atomic energies and the stress dependence of activation energy barriers; that it is statistically modeled by a hierarchy of series and parallel couplings; and that it consists of a broad Gaussian core having a grafted far-left power-law tail with zero threshold and amplitude depending on temperature and load duration. With increasing structure size, the Gaussian core shrinks and the Weibull tail expands according to the weakest-link model for a finite chain of representative volume elements. The model captures experimentally observed deviations of the strength distribution from the Weibull distribution and of the mean strength scaling law from a power law. These deviations can be exploited for verification and calibration. The proposed theory will increase the safety of concrete structures, composite parts of aircraft or ships, microelectronic components, microelectromechanical systems, prosthetic devices, etc. It also will improve protection against hazards such as landslides, avalanches, ice breaks, and rock or soil failures.
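
    A schematic numerical illustration of the weakest-link size effect described above, under our own assumed parameter values: the single-RVE strength distribution is taken as a Gaussian core with a grafted power-law left tail, and the median strength of a chain of N elements follows from P_N = 1 - (1 - P_1)^N.

      import numpy as np
      from scipy import stats
      from scipy.optimize import brentq

      MU, CV, P_GR, M = 1.0, 0.08, 1e-3, 24.0               # illustrative RVE parameters
      S_GR = stats.norm.ppf(P_GR, loc=MU, scale=CV * MU)    # grafting stress

      def p1(sigma):
          """Failure probability of a single RVE: Gaussian core, power-law left tail."""
          if sigma < S_GR:
              return P_GR * (sigma / S_GR) ** M
          return stats.norm.cdf(sigma, loc=MU, scale=CV * MU)

      def median_strength(n_rve):
          """Median strength of a chain of n_rve elements (weakest-link model)."""
          target = 1.0 - 0.5 ** (1.0 / n_rve)               # single-RVE probability giving chain failure 0.5
          return brentq(lambda s: p1(s) - target, 1e-9, 2.0)

      for n in (1, 10, 100, 1_000, 10_000, 100_000):
          print(f"N = {n:>6d} RVEs: median strength {median_strength(n):.3f}")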

  4. Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions

    NASA Astrophysics Data System (ADS)

    Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia

    2018-03-01

    Results of various investigations show a relationship between flotation parameters and gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and the analysis of this distribution is useful for a mathematical description of the flotation process. The flotation process depends on many variable factors, mainly events such as the collision of a single particle with a gas bubble, adhesion of the particle to the bubble surface, and detachment. These factors are characterized by randomness, so it is only possible to speak of the probability of occurrence of each of these events, which directly affects the speed of the process and thus the flotation rate constant. The probability of bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, air consumption, degree of pulp aeration, energy dissipation and average feed particle size. Appropriate identification and description of the parameters of gas bubble dispersion help to complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas phase dispersion, by way of the size distribution of air bubbles, in a flotation chamber under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods in the Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.

  5. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.

  6. Rare events in networks with internal and external noise

    NASA Astrophysics Data System (ADS)

    Hindes, J.; Schwartz, I. B.

    2017-12-01

    We study rare events in networks with both internal and external noise, and develop a general formalism for analyzing rare events that combines pair-quenched techniques and large-deviation theory. The probability distribution, shape, and time scale of rare events are considered in detail for extinction in the Susceptible-Infected-Susceptible model as an illustration. We find that when both types of noise are present, there is a crossover region as the network size is increased, where the probability exponent for large deviations no longer increases linearly with the network size. We demonstrate that the form of the crossover depends on whether the endemic state is localized near the epidemic threshold or not.

  7. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    PubMed

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R(max), and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy epsilon(R) = R. The effective chemical potential mu governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter beta decreases. Interestingly, beta*mu is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R(g), where R(g) is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed in Fermi-like subdistributions for different atomic types tau, F(tau)(r), with Sigma(tau)F(tau)(r) = F(r), which depend on two additional parameters mu(tau) and h(tau). The chemical potential mu(tau) affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h(tau), which appears in a type-dependent atomic effective energy, epsilon(tau)(r) = h(tau)r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, or epsilon(tau)*(r) = h(tau)* r^alpha(tau), in which case a correlation with hydrophobicity scales is found for the product alpha(tau)h(tau)*. These results indicate that compact globular proteins are consistent with a thermodynamic system governed by hydrophobic-like energy functions, with reduced distances from the geometrical center, reflecting atomic burials, and provide a conceptual framework for the eventual prediction from sequence of a few parameters from which whole atomic probability distributions and potentials of mean force can be reconstructed. Copyright 2006 Wiley-Liss, Inc.

  8. Factor contribution to fire occurrence, size, and burn probability in a subtropical coniferous forest in East China.

    PubMed

    Ye, Tao; Wang, Yao; Guo, Zhixing; Li, Yijia

    2017-01-01

    The contribution of factors including fuel type, fire-weather conditions, topography and human activity to fire regime attributes (e.g. fire occurrence, size distribution and severity) has been intensively discussed. The relative importance of those factors in explaining the burn probability (BP), which is critical in terms of fire risk management, has been insufficiently addressed. Focusing on a subtropical coniferous forest with strong human disturbance in East China, our main objective was to evaluate and compare the relative importance of fuel composition, topography, and human activity for fire occurrence, size and BP. Local BP distribution was derived with stochastic fire simulation approach using detailed historical fire data (1990-2010) and forest-resource survey results, based on which our factor contribution analysis was carried out. Our results indicated that fuel composition had the greatest relative importance in explaining fire occurrence and size, but human activity explained most of the variance in BP. This implies that the influence of human activity is amplified through the process of overlapping repeated ignition and spreading events. This result emphasizes the status of strong human disturbance in local fire processes. It further confirms the need for a holistic perspective on factor contribution to fire likelihood, rather than focusing on individual fire regime attributes, for the purpose of fire risk management.

  9. Sample size guidelines for fitting a lognormal probability distribution to censored most probable number data with a Markov chain Monte Carlo method.

    PubMed

    Williams, Michael S; Cao, Yong; Ebel, Eric D

    2013-07-15

    Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
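
    The sketch below illustrates the underlying censored-data fitting problem in a simplified form: a maximum-likelihood fit of a lognormal to data that are left-censored at a limit of detection. It stands in for, and is much simpler than, the Markov chain Monte Carlo fit to censored MPN data evaluated in the paper; all parameter values are illustrative.

      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize

      rng = np.random.default_rng(8)
      mu_true, sd_true, lod, n = 0.0, 1.0, 1.5, 600      # illustrative parameters and limit of detection
      x = rng.lognormal(mu_true, sd_true, size=n)
      observed = x[x >= lod]                             # enumerated samples
      n_censored = n - observed.size                     # samples below the limit of detection

      def neg_log_lik(theta):
          mu, log_sd = theta
          sd = np.exp(log_sd)
          ll = stats.lognorm.logpdf(observed, s=sd, scale=np.exp(mu)).sum()
          ll += n_censored * stats.lognorm.logcdf(lod, s=sd, scale=np.exp(mu))
          return -ll

      fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
      mu_hat, sd_hat = fit.x[0], np.exp(fit.x[1])
      print(f"censored fraction {n_censored / n:.2f}, estimated mu {mu_hat:.2f}, sigma {sd_hat:.2f}")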

  10. Exact Derivation of a Finite-Size Scaling Law and Corrections to Scaling in the Geometric Galton-Watson Process

    PubMed Central

    Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc

    2016-01-01

    The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities in the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for the Galton-Watson branching processes when the number of offspring of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter case, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
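
    A quick simulation sketch of the process analyzed above: a Galton-Watson branching process with a geometric offspring distribution at the critical point, with total progeny truncated at a finite cap so that the finite-size cutoff of the size distribution can be inspected numerically. The cap and number of runs are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(9)

      def total_progeny(p, cap):
          """Total individuals ever born; geometric offspring with mean (1 - p) / p, truncated at cap."""
          alive, total = 1, 1
          while alive and total < cap:
              offspring = rng.geometric(p, size=alive) - 1   # offspring counts on {0, 1, 2, ...}
              alive = int(offspring.sum())
              total += alive
          return min(total, cap)

      p_crit, cap, runs = 0.5, 10_000, 5000                  # mean offspring is 1 at p = 0.5 (critical)
      sizes = np.array([total_progeny(p_crit, cap) for _ in range(runs)])
      tail = [np.mean(sizes >= s) for s in (10, 100, 1000)]
      print("P(size >= 10, 100, 1000) =", np.round(tail, 4))
      # At criticality these survival probabilities decay roughly like s**(-1/2)
      # until the finite-size cap cuts the distribution off.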

  11. SAXS Combined with UV-vis Spectroscopy and QELS: Accurate Characterization of Silver Sols Synthesized in Polymer Matrices.

    PubMed

    Bulavin, Leonid; Kutsevol, Nataliya; Chumachenko, Vasyl; Soloviov, Dmytro; Kuklin, Alexander; Marynin, Andrii

    2016-12-01

    The present work demonstrates a validation of small-angle X-ray scattering (SAXS) combined with ultraviolet-visible (UV-vis) spectroscopy and quasi-elastic light scattering (QELS) analysis for the characterization of silver sols synthesized in polymer matrices. The internal structure of the polymer matrix and the chemical nature of the polymer controlled the sol size characteristics. It was shown that for precise analysis of the nanoparticle size distribution these techniques should be used simultaneously. All applied methods were in good agreement for the characterization of the size distribution of small particles (less than 60 nm) in the sols. Some deviations of the theoretical curves from the experimental ones were observed; the most probable cause is that the nanoparticles were not entirely spherical in form.

  12. A predictive approach to selecting the size of a clinical trial, based on subjective clinical opinion.

    PubMed

    Spiegelhalter, D J; Freedman, L S

    1986-01-01

    The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.

  13. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  14. Toward a comprehensive theory for the sweeping of trapped radiation by inert orbiting matter

    NASA Technical Reports Server (NTRS)

    Fillius, Walker

    1988-01-01

    There is a need to calculate loss rates when trapped Van Allen radiation encounters inert orbiting material such as planetary rings and satellites. An analytic expression for the probability of a hit in a bounce encounter is available for all cases where the absorber is spherical and the particles are gyrotropically distributed on a cylindrical flux tube. The hit probability is a function of the particle's pitch angle, the size of the absorber, and the distance between flux tube and absorber, when distances are scaled to the gyroradius of a particle moving perpendicular to the magnetic field. Using this expression, hit probabilities have been computed in drift encounters for all regimes of particle energies and absorber sizes. This technique generalizes the approach to sweeping lifetimes, and is particularly suitable for attacking the inverse problem, where one is given a sweeping signature and wants to deduce the properties of the absorber(s).

  15. Probabilistic measures of persistence and extinction in measles (meta)populations.

    PubMed

    Gunning, Christian E; Wearing, Helen J

    2013-08-01

    Persistence and extinction are fundamental processes in ecological systems that are difficult to accurately measure due to stochasticity and incomplete observation. Moreover, these processes operate on multiple scales, from individual populations to metapopulations. Here, we examine an extensive new data set of measles case reports and associated demographics in pre-vaccine era US cities, alongside a classic England & Wales data set. We first infer the per-population quasi-continuous distribution of log incidence. We then use stochastic, spatially implicit metapopulation models to explore the frequency of rescue events and apparent extinctions. We show that, unlike critical community size, the inferred distributions account for observational processes, allowing direct comparisons between metapopulations. The inferred distributions scale with population size. We use these scalings to estimate extinction boundary probabilities. We compare these predictions with measurements in individual populations and random aggregates of populations, highlighting the importance of medium-sized populations in metapopulation persistence. © 2013 John Wiley & Sons Ltd/CNRS.

  16. Framework for cascade size calculations on random networks

    NASA Astrophysics Data System (ADS)

    Burkholz, Rebekka; Schweitzer, Frank

    2018-04-01

    We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution, degree-degree correlations, and, in the case of threshold models, for arbitrary threshold distribution. With our approach, we shift the perspective from the known branching process approximations to the iterative update of suitable probability distributions. Such distributions are key to capture cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g., if load is accumulated over time. As a proof of concept, we provide two examples: (a) Constant load models that cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not tractable by branching process approximations before. Our derivations cover the whole cascade dynamics, not only their steady state. This allows us to include interventions in time or further model complexity in the analysis.

  17. The emergence of different tail exponents in the distributions of firm size variables

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki

    2013-05-01

    We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (logK,logL,logY), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.

  18. Endemic and widespread coral reef fishes have similar mitochondrial genetic diversity.

    PubMed

    Delrieu-Trottin, Erwan; Maynard, Jeffrey; Planes, Serge

    2014-12-22

    Endemic species are frequently assumed to have lower genetic diversity than species with large distributions, even if closely related. This assumption is based on research from the terrestrial environment and theoretical evolutionary modelling. We test this assumption in the marine environment by analysing the mitochondrial genetic diversity of 33 coral reef fish species from five families sampled from Pacific Ocean archipelagos. Surprisingly, haplotype and nucleotide diversity did not differ significantly between endemic and widespread species. The probable explanation is that the effective population size of some widespread fishes locally is similar to that of many of the endemics. Connectivity across parts of the distribution of the widespread species is probably low, so widespread species can operate like endemics at the extreme or isolated parts of their range. Mitochondrial genetic diversity of many endemic reef fish species may not either limit range size or be a source of vulnerability. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Evolution of Particle Size Distributions in Fragmentation Over Time

    NASA Astrophysics Data System (ADS)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution, but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs with discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as the equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. The maturation index can increment continuously, for example under grinding conditions, or as discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model to the evolution of particle size distributions associated with episodic and continuous fragmentation and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract Submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth 91(B2), 1921-1926.

  20. Calculation of the equilibrium distribution for a deleterious gene by the finite Fourier transform.

    PubMed

    Lange, K

    1982-03-01

    In a population of constant size every deleterious gene eventually attains a stochastic equilibrium between mutation and selection. The individual probabilities of this equilibrium distribution can be computed by an application of the finite Fourier transform to an appropriate branching process formula. Specific numerical examples are discussed for the autosomal dominants, Huntington's chorea and chondrodystrophy, and for the X-linked recessive, Becker's muscular dystrophy.
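
    The numerical device named in the title, the finite Fourier transform applied to a generating-function formula, can be illustrated with a generic sketch. The probability generating function below (a Poisson pgf) is only a stand-in for the branching-process pgf of the paper; the inversion step on the roots of unity is the same.

      import numpy as np

      N = 64                                  # truncation length (assumption)
      lam = 2.0
      pgf = lambda s: np.exp(lam * (s - 1.0))  # stand-in pgf, not the paper's

      w = np.exp(2j * np.pi * np.arange(N) / N)   # N-th roots of unity
      # p_k = (1/N) * sum_j pgf(w^j) * w^(-j*k), computed with the FFT
      p = np.fft.fft(pgf(w)).real / N

      print(p[:5])   # ~ Poisson(2) probabilities for k = 0..4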

  1. Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model

    NASA Astrophysics Data System (ADS)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present the development of a soil evolution framework and multiscale modelling of the surface of Mars, Moon and Itokawa thus providing an atlas of extra-terrestrial Particle Size Distributions (PSD). These PSDs are profoundly based on a tailoring method which interconnects several datasets from different sites captured by the various missions. The final integrated product is then fully justified through a soil evolution analysis model mathematically constructed via fundamental physical principles (Charalambous, 2013). The construction of the PSD takes into account the macroscale fresh primary impacts and their products, the mesoscale distributions obtained by the in-situ data of surface missions (Golombek et al., 1997, 2012) and finally the microscopic scale distributions provided by Curiosity and Phoenix Lander (Pike, 2011). The distribution naturally extends at the magnitudinal scales at which current data does not exist due to the lack of scientific instruments capturing the populations at these data absent scales. The extension is based on the model distribution (Charalambous, 2013) which takes as parameters known values of material specific probabilities of fragmentation and grinding limits. Additionally, the establishment of a closed-form statistical distribution provides a quantitative description of the soil's structure. Consequently, reverse engineering of the model distribution allows the synthesis of soil that faithfully represents the particle population at the studied sites (Charalambous, 2011). Such representation essentially delivers a virtual soil environment to work with for numerous applications. A specific application demonstrated here will be the information that can directly be extracted for the successful drilling probability as a function of distance in an effort to aid the HP3 instrument of the 2016 Insight Mission to Mars. Pike, W. T., et al. "Quantification of the dry history of the Martian soil inferred from in situ microscopy." Geophysical Research Letters 38.24 (2011). C. A. Charalambous and W. T. Pike (2013). 'Evolution of Particle Size Distributions in Fragmentation Over Time' Abstract Submitted to the AGU 46th Fall Meeting. Charalambous, C., Pike, W. T., Goetz, W., Hecht, M. H., & Staufer, U. (2011, December). 'A Digital Martian Soil based on In-Situ Data.' In AGU Fall Meeting Abstracts (Vol. 1, p. 1669). Golombek, M., & Rapp, D. (1997). 'Size-frequency distributions of rocks on Mars and Earth analog sites: Implications for future landed missions.' Journal of Geophysical Research, 102(E2), 4117-4129. Golombek, M., Huertas, A., Kipp, D., & Calef, F. (2012). 'Detection and characterization of rocks and rock size-frequency distributions at the final four Mars Science Laboratory landing sites.' Mars, 7, 1-22.

  2. Reciprocal-space mapping of epitaxic thin films with crystallite size and shape polydispersity.

    PubMed

    Boulle, A; Conchon, F; Guinebretière, R

    2006-01-01

    A development is presented that allows the simulation of reciprocal-space maps (RSMs) of epitaxic thin films exhibiting fluctuations in the size and shape of the crystalline domains over which diffraction is coherent (crystallites). Three different crystallite shapes are studied, namely parallelepipeds, trigonal prisms and hexagonal prisms. For each shape, two cases are considered. Firstly, the overall size is allowed to vary but with a fixed thickness/width ratio. Secondly, the thickness and width are allowed to vary independently. The calculations are performed assuming three different size probability density functions: the normal distribution, the lognormal distribution and a general histogram distribution. In all cases considered, the computation of the RSM only requires a two-dimensional Fourier integral and the integrand has a simple analytical expression, i.e. there is no significant increase in computing times by taking size and shape fluctuations into account. The approach presented is compatible with most lattice disorder models (dislocations, inclusions, mosaicity, ...) and allows a straightforward account of the instrumental resolution. The applicability of the model is illustrated with the case of an yttria-stabilized zirconia film grown on sapphire.

  3. The 1996 Leonid shower as studied with a potassium lidar: Observations and inferred meteoroid sizes

    NASA Astrophysics Data System (ADS)

    Höffner, Josef; von Zahn, Ulf; McNeil, William J.; Murad, Edmond

    1999-02-01

    We report on the observation and analysis of meteor trails that are detected by ground-based lidar tuned to the D1 fine structure line of K. The lidar is located at Kühlungsborn, Germany. The echo profiles are analyzed with a temporal resolution of about 1 s and altitude resolution of 200 m. Identification of meteor trails in the large archive of raw data is performed with help of an automated computer search code. During the peak of the Lenoid meteor shower on the morning of November 17, 1996, we observed seven meteor trails between 0245 and 0445 UT. Their mean altitude was 89.0 km. The duration of observation of individual trails ranges from 3 s to ~30 min. We model the probability of observing a meteor trail by ground-based lidar as a function of both altitude distribution and duration of the trails. These distributions depend on the mass distribution, entry velocity, and entry angle of the meteoroids, on the altitude-dependent chemical and dynamical lifetimes of the released K atom, and on the absolute detection sensitivity of our lidar experiment. From the modeling, we derive the statistical likelihood of detection of trails from meteoroids of a particular size. These bracket quite well the observed trails. The model also gives estimates of the probable size of the meteoroids based on characteristics of individual trails.

  4. Chord-length and free-path distribution functions for many-body systems

    NASA Astrophysics Data System (ADS)

    Lu, Binglin; Torquato, S.

    1993-04-01

    We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or ``phases.'' The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the ``mean intercept length'' or ``mean chord length.'' The chord-length distribution function is of importance in transport phenomena and problems involving ``discrete free paths'' of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material and we show that it is actually the chord-length distribution function for the system in which the ``pore space'' is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal path function gives the probability of finding a line segment of length z wholly in one of the ``phases'' when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersivity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.

  5. Particle Size Reduction in Geophysical Granular Flows: The Role of Rock Fragmentation

    NASA Astrophysics Data System (ADS)

    Bianchi, G.; Sklar, L. S.

    2016-12-01

    Particle size reduction in geophysical granular flows is caused by abrasion and fragmentation, and can affect transport dynamics by altering the particle size distribution. While the Sternberg equation is commonly used to predict the mean abrasion rate in the fluvial environment, and can also be applied to geophysical granular flows, predicting the evolution of the particle size distribution requires a better understanding of the controls on the rate of fragmentation and the size distribution of resulting particle fragments. To address this knowledge gap, we are using single-particle free-fall experiments to test for the influence of particle size, impact velocity, and rock properties on fragmentation and abrasion rates. Rock types tested include granodiorite, basalt, and serpentinite. Initial particle masses and drop heights range from 20 to 1000 grams and 0.1 to 3.0 meters, respectively. Preliminary results of free-fall experiments suggest that the probability of fragmentation varies as a power function of kinetic energy on impact. The resulting size distributions of rock fragments can be collapsed by normalizing by initial particle mass, and can be fit with a generalized Pareto distribution. We apply the free-fall results to understand the evolution of granodiorite particle-size distributions in granular flow experiments using rotating drums ranging in diameter from 0.2 to 4.0 meters. In the drums, we find that the rates of silt production by abrasion and gravel production by fragmentation scale with drum size. To compare these rates with free-fall results we estimate the particle impact frequency and velocity. We then use population balance equations to model the evolution of particle size distributions due to the combined effects of abrasion and fragmentation. Finally, we use the free-fall and drum experimental results to model particle size evolution in Inyo Creek, a steep, debris-flow dominated catchment, and compare model results to field measurements.
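
    A minimal sketch of the fragment-size fitting step, assuming fragment masses have been normalized by the initial particle mass, is shown below. The data are synthetic, not the authors' measurements; only the use of a generalized Pareto fit follows the abstract.

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(0)

      # Synthetic stand-in for normalized fragment masses (NOT the study's data)
      norm_frag_mass = genpareto.rvs(c=0.3, scale=0.05, size=500, random_state=rng)

      # Fit a generalized Pareto distribution with the location fixed at zero
      shape, loc, scale = genpareto.fit(norm_frag_mass, floc=0.0)
      print(f"fitted shape = {shape:.3f}, scale = {scale:.3f}")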

  6. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Identifying precisely the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which we have the least empirical information available; second, a universally accepted definition of tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponentials (heavy-tailed) and hyper-exponentials (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records were examined, from all over the world and with sample sizes over 100 years, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
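
    The empirical mean excess function itself is simple to compute; a hedged sketch with synthetic data (not the study's rainfall records) is given below. For an exponential sample the regression slope of e(u) against u is close to zero, while a Pareto-type tail gives a clearly positive slope.

      import numpy as np

      def mean_excess(sample, thresholds):
          """Empirical mean excess function e(u) = E[X - u | X > u]."""
          x = np.asarray(sample, dtype=float)
          return np.array([(x[x > u] - u).mean() if (x > u).any() else np.nan
                           for u in thresholds])

      rng = np.random.default_rng(1)
      exp_data = rng.exponential(scale=10.0, size=5000)       # light tail
      par_data = (rng.pareto(a=2.0, size=5000) + 1) * 10.0    # heavy tail

      u = np.linspace(10, 60, 25)
      slope_exp = np.polyfit(u, mean_excess(exp_data, u), 1)[0]   # ~ 0
      slope_par = np.polyfit(u, mean_excess(par_data, u), 1)[0]   # clearly positive
      print(slope_exp, slope_par)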

  7. A Financial Market Model Incorporating Herd Behaviour.

    PubMed

    Wray, Christopher M; Bishop, Steven R

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market price of an equity index option.

  8. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
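
    For reference, a standard parametrisation of the two-dimensional Gabor function (an oriented sinusoidal carrier under a Gaussian envelope) is sketched below; this is an illustrative form, not the paper's code or its learned parameter values.

      import numpy as np

      def gabor2d(x, y, sigma_x, sigma_y, freq, theta, phase=0.0):
          """Oriented sinusoid modulated by a Gaussian envelope."""
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
          carrier = np.cos(2.0 * np.pi * freq * xr + phase)
          return envelope * carrier

      # Example basis function on a 32 x 32 grid (illustrative parameters)
      xx, yy = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
      g = gabor2d(xx, yy, sigma_x=0.3, sigma_y=0.3, freq=2.0, theta=np.pi / 4)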

  9. Scaling of strength and lifetime probability distributions of quasibrittle structures based on atomistic fracture mechanics

    PubMed Central

    Bažant, Zdeněk P.; Le, Jia-Liang; Bazant, Martin Z.

    2009-01-01

    The failure probability of engineering structures such as aircraft, bridges, dams, nuclear structures, and ships, as well as microelectronic components and medical implants, must be kept extremely low, typically <10−6. The safety factors needed to ensure it have so far been assessed empirically. For perfectly ductile and perfectly brittle structures, the empirical approach is sufficient because the cumulative distribution function (cdf) of random material strength is known and fixed. However, such an approach is insufficient for structures consisting of quasibrittle materials, which are brittle materials with inhomogeneities that are not negligible compared with the structure size. The reason is that the strength cdf of quasibrittle structure varies from Gaussian to Weibullian as the structure size increases. In this article, a recently proposed theory for the strength cdf of quasibrittle structure is refined by deriving it from fracture mechanics of nanocracks propagating by small, activation-energy-controlled, random jumps through the atomic lattice. This refinement also provides a plausible physical justification of the power law for subcritical creep crack growth, hitherto considered empirical. The theory is further extended to predict the cdf of structural lifetime at constant load, which is shown to be size- and geometry-dependent. The size effects on structure strength and lifetime are shown to be related and the latter to be much stronger. The theory fits previously unexplained deviations of experimental strength and lifetime histograms from the Weibull distribution. Finally, a boundary layer method for numerical calculation of the cdf of structural strength and lifetime is outlined. PMID:19561294

  10. Evaluating estimators for numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population

    USGS Publications Warehouse

    Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.

    2007-01-01

    Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size, with bias decreasing as effort increased. Based on the simulation results we recommend the Chao estimator for model M_h be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
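
    For orientation, the Chao estimator for model M_h can be written from the sighting-frequency counts alone; the sketch below uses the standard Chao (1987) lower-bound form with illustrative counts, not the Yellowstone data or the exact variant evaluated in the paper.

      def chao_mh(capture_counts):
          """Chao (1987) lower-bound estimator for model M_h.
          capture_counts: number of times each observed individual was sighted."""
          f1 = sum(1 for c in capture_counts if c == 1)   # sighted exactly once
          f2 = sum(1 for c in capture_counts if c == 2)   # sighted exactly twice
          s_obs = len(capture_counts)
          if f2 == 0:                                      # bias-corrected fallback
              return s_obs + f1 * (f1 - 1) / 2.0
          return s_obs + f1 * f1 / (2.0 * f2)

      # Illustrative sighting frequencies (hypothetical, not field data)
      print(chao_mh([1, 1, 1, 2, 2, 3, 1, 4, 2, 1]))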

  11. Criticality in finite dynamical networks

    NASA Astrophysics Data System (ADS)

    Rohlf, Thimo; Gulbahce, Natali; Teuscher, Christof

    2007-03-01

    It has been shown analytically and experimentally that both random boolean and random threshold networks show a transition from ordered to chaotic dynamics at a critical average connectivity Kc in the thermodynamical limit [1]. By looking at the statistical distributions of damage spreading (damage sizes), we go beyond this extensively studied mean-field approximation. We study the scaling properties of damage size distributions as a function of system size N and initial perturbation size d(t=0). We present numerical evidence that another characteristic point, Kd exists for finite system sizes, where the expectation value of damage spreading in the network is independent of the system size N. Further, the probability to obtain critical networks is investigated for a given system size and average connectivity k. Our results suggest that, for finite size dynamical networks, phase space structure is very complex and may not exhibit a sharp order-disorder transition. Finally, we discuss the implications of our findings for evolutionary processes and learning applied to networks which solve specific computational tasks. [1] Derrida, B. and Pomeau, Y. (1986), Europhys. Lett., 1, 45-49

  12. Uranium distribution and 'excessive' U-He ages in iron meteoritic troilite

    NASA Technical Reports Server (NTRS)

    Fisher, D. E.

    1985-01-01

    Fission tracking techniques were used to measure the uranium distribution in meteoritic troilite and graphite. The obtained fission tracking data showed a heterogeneous distribution of tracks with a significant portion of track density present in the form of uranium clusters at least 10 microns in size. The matrix containing the clusters was also heterogeneous in composition with U concentrations of about 0.2-4.7 ppb. U/He ages could not be estimated on the basis of the heterogeneous U distributions, so previously reported estimates of U/He ages in the presolar range are probably invalid.

  13. Velocity and size of droplets in dense region of diesel fuel spray on transient needle opening condition

    NASA Astrophysics Data System (ADS)

    Ueki, Hironobu; Ishida, Masahiro; Sakaguchi, Daisaku

    2005-06-01

    In order to investigate the effect of transient needle opening on the early stage of spray behavior, simultaneous measurements of droplet velocity and size were conducted with a newly developed laser 2-focus velocimeter (L2F). The micro-scale probe of the L2F consisted of two foci separated by 36 µm. The tested nozzle had a single hole with a diameter of 0.2 mm. The measurements of injection pressure, needle lift, and crank angle were synchronized with the spray measurement by the L2F at the position 10 mm downstream from the nozzle exit. It has been clearly shown that the velocity and size of droplets increase with needle valve opening and that the probability density distribution of droplet size can be fitted to the Nukiyama-Tanasawa distribution under the transient needle opening condition.
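
    A commonly used form of the Nukiyama-Tanasawa number density is f(D) proportional to D^p * exp(-b * D^q); the sketch below assumes this form with illustrative exponents, which are not the fitted values of the study.

      import numpy as np
      from scipy.special import gamma

      def nukiyama_tanasawa_pdf(d, b, p=2.0, q=1.0):
          """Nukiyama-Tanasawa number density f(D) ~ D^p exp(-b D^q),
          normalised to integrate to one over D > 0."""
          norm = q * b ** ((p + 1.0) / q) / gamma((p + 1.0) / q)
          return norm * d ** p * np.exp(-b * d ** q)

      d = np.linspace(0.1, 60.0, 200)   # droplet diameter in micrometres (assumption)
      f = nukiyama_tanasawa_pdf(d, b=0.2, p=2.0, q=1.0)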

  14. Probabilistic approach to lysozyme crystal nucleation kinetics.

    PubMed

    Dimitrov, Ivaylo L; Hodzhaoglu, Feyzim V; Koleva, Dobryana P

    2015-09-01

    Nucleation of lysozyme crystals in quiescent solutions at a regime of progressive nucleation is investigated under an optical microscope at conditions of constant supersaturation. A method based on the stochastic nature of crystal nucleation and using discrete time sampling of small solution volumes for the presence or absence of detectable crystals is developed. It allows probabilities for crystal detection to be experimentally estimated. One hundred single samplings were used for each probability determination for 18 time intervals and six lysozyme concentrations. Fitting of a particular probability function to experimentally obtained data made possible the direct evaluation of stationary rates for lysozyme crystal nucleation, the time for growth of supernuclei to a detectable size and probability distribution of nucleation times. Obtained stationary nucleation rates were then used for the calculation of other nucleation parameters, such as the kinetic nucleation factor, nucleus size, work for nucleus formation and effective specific surface energy of the nucleus. The experimental method itself is simple and adaptable and can be used for crystal nucleation studies of arbitrary soluble substances with known solubility at particular solution conditions.
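
    The fitting step can be sketched under a common assumption for progressive nucleation: the probability of detecting at least one crystal in a volume V by time t follows P(t) = 1 - exp(-J V (t - t_g)) for t > t_g, where J is the stationary nucleation rate and t_g the time needed to grow to a detectable size. Both the functional form and all numbers below are illustrative assumptions, not the paper's data.

      import numpy as np
      from scipy.optimize import curve_fit

      V = 1.0e-3   # sampled solution volume in mm^3 (assumption)

      def detection_probability(t, J, t_g):
          return 1.0 - np.exp(-J * V * np.clip(t - t_g, 0.0, None))

      # Hypothetical detection fractions from repeated samplings (synthetic)
      t_obs = np.array([1, 2, 4, 8, 16, 24, 32, 48], dtype=float)          # hours
      p_obs = np.array([0.00, 0.01, 0.15, 0.38, 0.67, 0.83, 0.91, 0.97])

      (J_hat, tg_hat), _ = curve_fit(detection_probability, t_obs, p_obs, p0=[50.0, 1.0])
      print(f"J ~ {J_hat:.1f} per mm^3 per hour, growth time ~ {tg_hat:.2f} h")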

  15. Airframe integrity based on Bayesian approach

    NASA Astrophysics Data System (ADS)

    Hurtado Cahuao, Jose Luis

    Aircraft aging has become an immense challenge in terms of ensuring the safety of the fleet while controlling life cycle costs. One of the major concerns in aircraft structures is the development of fatigue cracks in the fastener holes. A probability-based method has been proposed to manage this problem. In this research, Bayes' theorem is used to assess airframe integrity by updating generic data with airframe inspection data while such data are compiled. This research discusses the methodology developed for assessment of loss of airframe integrity due to fatigue cracking in the fastener holes of an aging platform. The methodology requires a probability density function (pdf) at the end of SAFE life. Subsequently, a crack growth regime begins. As the Bayesian analysis requires information on a prior initial crack size pdf, such a pdf is assumed and verified to be lognormally distributed. The prior distribution of crack size as cracks grow is modeled through a combined Inverse Power Law (IPL) model and lognormal relationships. The first set of inspections is used as the evidence for updating the crack size distribution at the various stages of aircraft life. Moreover, the materials used in the structural part of the aircraft have variations in their properties due to calibration errors and machine alignment. A Matlab routine (PCGROW) is developed to calculate the growth of the crack size distribution through three different crack growth models. As the first step, the material properties and the initial crack size are sampled. A standard Monte Carlo simulation is employed for this sampling process. At the corresponding aircraft age, the crack observed during the inspections is used to update the crack size distribution and proceed in time. After the updating, it is possible to estimate the probability of structural failure as a function of flight hours for a given aircraft in the future. The results show very accurate and useful values related to the reliability and integrity of airframes in aging aircraft. Inspection data shown in this dissertation are not the actual data from known aircraft and are only used to demonstrate the methodologies.

  16. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience. PMID:28253258

  17. Anthropic prediction for a large multi-jump landscape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz-Perlov, Delia, E-mail: delia@perlov.com

    2008-10-15

    The assumption of a flat prior distribution plays a critical role in the anthropic prediction of the cosmological constant. In a previous paper we analytically calculated the distribution for the cosmological constant, including the prior and anthropic selection effects, in a large toy 'single-jump' landscape model. We showed that it is possible for the fractal prior distribution that we found to behave as an effectively flat distribution in a wide class of landscapes, but only if the single-jump size is large enough. We extend this work here by investigating a large (N ≈ 10^500) toy 'multi-jump' landscape model. The jump sizes range over three orders of magnitude and an overall free parameter c determines the absolute size of the jumps. We will show that for 'large' c the distribution of probabilities of vacua in the anthropic range is effectively flat, and thus the successful anthropic prediction is validated. However, we argue that for small c, the distribution may not be smooth.

  18. Power-law tails in the distribution of order imbalance

    NASA Astrophysics Data System (ADS)

    Zhang, Ting; Gu, Gao-Feng; Xu, Hai-Chuan; Xiong, Xiong; Chen, Wei; Zhou, Wei-Xing

    2017-10-01

    We investigate the probability distribution of order imbalance calculated from the order flow data of 43 Chinese stocks traded on the Shenzhen Stock Exchange. Two definitions of order imbalance are considered based on the order number and the order size. We find that the order imbalance distributions of individual stocks have power-law tails. However, the tail index fluctuates remarkably from stock to stock. We also investigate the distributions of aggregated order imbalance of all stocks at different timescales Δt. We find no clear trend in the tail index with respect to Δt. All the analyses suggest that the distributions of order imbalance are asymmetric.
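
    A standard way to estimate such a tail index is the Hill estimator applied to the largest order statistics; the sketch below uses a synthetic heavy-tailed series rather than exchange data, and the choice of k is an assumption left to the user.

      import numpy as np

      def hill_tail_index(x, k):
          """Hill estimator of the power-law tail exponent from the k largest |x|."""
          x = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]
          top = x[:k]
          return 1.0 / np.mean(np.log(top / x[k]))

      rng = np.random.default_rng(2)
      imbalance = rng.standard_t(df=3, size=100_000)     # synthetic stand-in
      print(hill_tail_index(imbalance, k=1000))          # ~ 3 for a Student-t(3) tail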

  19. The utility of Bayesian predictive probabilities for interim monitoring of clinical trials

    PubMed Central

    Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn

    2014-01-01

    Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
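
    A minimal sketch of an interim predictive probability for a single-arm trial with a binary endpoint and a conjugate Beta prior is shown below; the prior, thresholds, and sample sizes are illustrative assumptions, not a design from the paper.

      import numpy as np
      from scipy.stats import beta, betabinom

      a0, b0 = 1.0, 1.0            # Beta(1, 1) prior (assumption)
      p0, gamma_thr = 0.20, 0.95   # null response rate and posterior success threshold
      n_int, x_int = 40, 12        # interim data: 12 responses in 40 patients
      n_max = 100                  # predefined maximum sample size
      m = n_max - n_int            # patients yet to be observed

      a_int, b_int = a0 + x_int, b0 + n_int - x_int

      pred_prob = 0.0
      for y in range(m + 1):                          # possible future responses
          a_fin, b_fin = a_int + y, b_int + m - y
          if beta.sf(p0, a_fin, b_fin) > gamma_thr:   # final posterior criterion met
              pred_prob += betabinom.pmf(y, m, a_int, b_int)

      print(f"predictive probability of final success: {pred_prob:.3f}")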

  20. Factor contribution to fire occurrence, size, and burn probability in a subtropical coniferous forest in East China

    PubMed Central

    Guo, Zhixing; Li, Yijia

    2017-01-01

    The contribution of factors including fuel type, fire-weather conditions, topography and human activity to fire regime attributes (e.g. fire occurrence, size distribution and severity) has been intensively discussed. The relative importance of those factors in explaining the burn probability (BP), which is critical in terms of fire risk management, has been insufficiently addressed. Focusing on a subtropical coniferous forest with strong human disturbance in East China, our main objective was to evaluate and compare the relative importance of fuel composition, topography, and human activity for fire occurrence, size and BP. Local BP distribution was derived with stochastic fire simulation approach using detailed historical fire data (1990–2010) and forest-resource survey results, based on which our factor contribution analysis was carried out. Our results indicated that fuel composition had the greatest relative importance in explaining fire occurrence and size, but human activity explained most of the variance in BP. This implies that the influence of human activity is amplified through the process of overlapping repeated ignition and spreading events. This result emphasizes the status of strong human disturbance in local fire processes. It further confirms the need for a holistic perspective on factor contribution to fire likelihood, rather than focusing on individual fire regime attributes, for the purpose of fire risk management. PMID:28207837

  1. Species abundance distribution and population dynamics in a two-community model of neutral ecology

    NASA Astrophysics Data System (ADS)

    Vallade, M.; Houchmandzadeh, B.

    2006-11-01

    Explicit formulas for the steady-state distribution of species in two interconnected communities of arbitrary sizes are derived in the framework of Hubbell’s neutral model of biodiversity. Migrations of seeds from both communities as well as mutations in both of them are taken into account. These results generalize those previously obtained for the “island-continent” model and they allow an analysis of the influence of the ratio of the sizes of the two communities on the dominance/diversity equilibrium. Exact expressions for species abundance distributions are deduced from a master equation for the joint probability distribution of species in the two communities. Moreover, an approximate self-consistent solution is derived. It corresponds to a generalization of previous results and it proves to be accurate over a broad range of parameters. The dynamical correlations between the abundances of a species in both communities are also discussed.

  2. Microstructure as a function of the grain size distribution for packings of frictionless disks: Effects of the size span and the shape of the distribution.

    PubMed

    Estrada, Nicolas; Oquendo, W F

    2017-10-01

    This article presents a numerical study of the effects of grain size distribution (GSD) on the microstructure of two-dimensional packings of frictionless disks. The GSD is described by a power law with two parameters controlling the size span and the shape of the distribution. First, several samples are built for each combination of these parameters. Then, by means of contact dynamics simulations, the samples are densified in oedometric conditions and sheared in a simple shear configuration. The microstructure is analyzed in terms of packing fraction, local ordering, connectivity, and force transmission properties. It is shown that the microstructure is notoriously affected by both the size span and the shape of the GSD. These findings confirm recent observations regarding the size span of the GSD and extend previous works by describing the effects of the GSD shape. Specifically, we find that if the GSD shape is varied by increasing the proportion of small grains by a certain amount, it is possible to increase the packing fraction, increase coordination, and decrease the proportion of floating particles. Thus, by carefully controlling the GSD shape, it is possible to obtain systems that are denser and better connected, probably increasing the system's robustness and optimizing important strength properties such as stiffness, cohesion, and fragmentation susceptibility.

  3. Statistical distribution of the vacuum energy density in racetrack Kähler uplift models in string theory

    NASA Astrophysics Data System (ADS)

    Sumitomo, Yoske; Tye, S.-H. Henry; Wong, Sam S. C.

    2013-07-01

    We study a racetrack model in the presence of the leading α'-correction in flux compactification in Type IIB string theory, for the purpose of getting conceivable de-Sitter vacua in the large compactified volume approximation. Unlike the Kähler Uplift model studied previously, the α'-correction is more controllable for the meta-stable de-Sitter vacua in the racetrack case since the constraint on the compactified volume size is very much relaxed. We find that the vacuum energy density Λ for de-Sitter vacua approaches zero exponentially as the volume grows. We also analyze properties of the probability distribution of Λ in this class of models. As in other cases studied earlier, the probability distribution again peaks sharply at Λ = 0. We also study the Racetrack Kähler Uplift model in the Swiss-Cheese type model.

  4. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152

  5. A chi-square goodness-of-fit test for non-identically distributed random variables: with application to empirical Bayes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, W.J.; Cox, D.D.; Martz, H.F.

    1997-12-01

    When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.

  6. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991-2004 above Kiruna, Sweden

    NASA Astrophysics Data System (ADS)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size-resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fit to 250 meter averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible; otherwise, assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius and distribution width for mid and late winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented and should be useful for setting bounds on retrievals of PSC properties from remote measurements and for constraining model representations of PSCs.

  7. Modeling Uncertainty in Military Supply Chain Management Decisions

    DTIC Science & Technology

    2014-06-23

    a compound probability distribution (Eppen and Martin, 1988; Lau and Lau, 2003; Lin, 2008). This paper will incorporate the previously described...distribution with and is selected for the regular state and the N(0.27, 0.19) is chosen for state 2. The demand in each state for a given lead...supplier receives orders of size Q from the buyer and purchases inventory from its vendors in a quantity that is an integer multiple N of the buyer’s

  8. Single and simultaneous binary mergers in Wright-Fisher genealogies.

    PubMed

    Melfi, Andrew; Viswanath, Divakar

    2018-05-01

    The Kingman coalescent is a commonly used model in genetics, which is often justified with reference to the Wright-Fisher (WF) model. Current proofs of convergence of WF and other models to the Kingman coalescent assume a constant sample size. However, sample sizes have become quite large in human genetics. Therefore, we develop a convergence theory that allows the sample size to increase with population size. If the haploid population size is N and the sample size is N^(1/3-ϵ), ϵ>0, we prove that Wright-Fisher genealogies involve at most a single binary merger in each generation with probability converging to 1 in the limit of large N. Single binary merger or no merger in each generation of the genealogy implies that the Kingman partition distribution is obtained exactly. If the sample size is N^(1/2-ϵ), Wright-Fisher genealogies may involve simultaneous binary mergers in a single generation but do not involve triple mergers in the large N limit. The asymptotic theory is verified using numerical calculations. Variable population sizes are handled algorithmically. It is found that even distant bottlenecks can increase the probability of triple mergers as well as simultaneous binary mergers in WF genealogies. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance 1, which is the distance between the release point and the boundary beyond which the population is absent.

  10. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We strengthen previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.

  11. Urban particle size distributions during two contrasting dust events originating from Taklimakan and Gobi Deserts.

    PubMed

    Zhao, Suping; Yu, Ye; Xia, Dunsheng; Yin, Daiying; He, Jianjun; Liu, Na; Li, Fang

    2015-12-01

    To understand the particle size distribution during two contrasting dust events originating from the Taklimakan and Gobi deserts, the dust origins of the two events were identified using the HYSPLIT trajectory model and MODIS and CALIPSO satellite data. The supermicron particles increased significantly during the dust events. The dust event from the Gobi desert mainly affected particles larger than 2.5 μm, while that from the Taklimakan desert mainly affected particles of 1.0-2.5 μm. It is found that the particle size distributions and their modal parameters, such as the VMD (volume median diameter), differ significantly for varying dust origins. The dust from the Taklimakan desert was finer than that from the Gobi desert, probably due in part to other influencing factors such as mixing between dust and urban emissions. Our findings illustrate the capacity of combining in situ data, satellite data, and trajectory modeling to characterize large-scale dust plumes with a variety of aerosol parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Anomalous diameter distribution shifts estimated from FIA inventories through time

    Treesearch

    Francis A. Roesch; Paul C. Van Deusen

    2010-01-01

    In the past decade, the United States Department of Agriculture Forest Service’s Forest Inventory and Analysis Program (FIA) has replaced regionally autonomous, periodic, state-wide forest inventories using various probability proportional to tree size sampling designs with a nationally consistent annual forest inventory design utilizing systematically spaced clusters...

  13. Finite-size scaling for discontinuous nonequilibrium phase transitions

    NASA Astrophysics Data System (ADS)

    de Oliveira, Marcelo M.; da Luz, M. G. E.; Fiore, Carlos E.

    2018-06-01

    A finite-size scaling theory, originally developed only for transitions to absorbing states [Phys. Rev. E 92, 062126 (2015), 10.1103/PhysRevE.92.062126], is extended to distinct sorts of discontinuous nonequilibrium phase transitions. Expressions for quantities such as response functions, reduced cumulants, and equal area probability distributions are derived from phenomenological arguments. Irrespective of system details, all these quantities scale with the volume, establishing the dependence on size. The approach generality is illustrated through the analysis of different models. The present results are a relevant step in trying to unify the scaling behavior description of nonequilibrium transition processes.

  14. Particle size distribution of mainstream tobacco and marijuana smoke. Analysis using the electrical aerosol analyzer.

    PubMed

    Anderson, P J; Wilson, J D; Hiller, F C

    1989-07-01

    Accurate measurement of cigarette smoke particle size distribution is important for estimation of lung deposition. Most prior investigators have reported a mass median diameter (MMD) in the size range of 0.3 to 0.5 micron, with a small geometric standard deviation (GSD), indicating few ultrafine (less than 0.1 micron) particles. A few studies, however, have suggested the presence of ultrafine particles by reporting a smaller count median diameter (CMD). Part of this disparity may be due to the inefficiency of previous sizing methods in measuring the ultrafine size range. We therefore used the electrical aerosol analyzer (EAA), which covers the ultrafine range, to evaluate the size distribution of smoke from standard research cigarettes, commercial filter cigarettes, and marijuana cigarettes with different delta 9-tetrahydrocannabinol contents. Four 35-cm³, 2-s puffs were generated at 60-s intervals, rapidly diluted, and passed through a charge neutralizer and into a 240-L chamber. Size distribution for six cigarettes of each type was measured, CMD and GSD were determined from a computer-generated log probability plot, and MMD was calculated. The size distribution parameters obtained were similar for all cigarettes tested, with an average CMD of 0.1 micron, an MMD of 0.38 micron, and a GSD of 2.0. The MMD found using the EAA is similar to that previously reported, but the CMD is distinctly smaller and the GSD larger, indicating the presence of many more ultrafine particles. These results may explain the disparity of CMD values found in existing data. Ultrafine particles are of toxicologic importance because their respiratory tract deposition is significantly higher than for particles 0.3 to 0.5 micron and because their large surface area facilitates adsorption and delivery of potentially toxic gases to the lung.
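
    For a lognormal size distribution the reported CMD and GSD determine the MMD through the Hatch-Choate relation MMD = CMD * exp(3 * ln(GSD)^2); with the reported CMD of 0.1 micron and GSD of 2.0 this gives roughly 0.42 micron, consistent in magnitude with the reported MMD of 0.38 micron. A one-line check:

      import numpy as np

      cmd, gsd = 0.1, 2.0
      mmd = cmd * np.exp(3.0 * np.log(gsd) ** 2)   # Hatch-Choate conversion
      print(f"MMD ~ {mmd:.2f} micron")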

  15. Lunar soil: Size distribution and mineralogical constituents

    USGS Publications Warehouse

    Duke, M.B.; Woo, C.C.; Bird, M.L.; Sellers, G.A.; Finkelman, R.B.

    1970-01-01

    The lunar soil collected by Apollo 11 consists primarily of submillimeter material and is finer in grain size than soil previously recorded photographically by Surveyor experiments. The main constituents are fine-grained to glassy rocks of basaltic affinity and coherent breccia of undetermined origin. Dark glass, containing abundant nickel-iron spheres, coats many rocks, mineral, and breccia fragments. Several types of homogeneous glass occur as fragments and spheres. Colorless spheres, probably an exotic component, are abundant in the fraction finer than 20 microns.

  16. New S control chart using skewness correction method for monitoring process dispersion of skewed distributions

    NASA Astrophysics Data System (ADS)

    Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha

    2017-11-01

    The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industry. Conventional control charts rely on a normality assumption, which does not always hold for industrial data. This paper proposes a new S control chart for monitoring process dispersion using a skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S), skewness correction R chart (SC-R), weighted variance R chart (WV-R), weighted variance S chart (WV-S), and standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detections is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control performance (Type I error) at almost all skewness levels and sample sizes n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than that of all the existing control charts for monitoring process dispersion in terms of both Type I error and probability of shift detection.

  17. Design and characterization of a cough simulator.

    PubMed

    Zhang, Bo; Zhu, Chao; Ji, Zhiming; Lin, Chao-Hsin

    2017-02-23

    Expiratory droplets from human coughing have always been considered as potential carriers of pathogens, responsible for respiratory infectious disease transmission. To study the transmission of disease by human coughing, a transient repeatable cough simulator has been designed and built. Cough droplets are generated by different mechanisms, such as the breaking of mucus, condensation and high-speed atomization from different depths of the respiratory tract. These mechanisms in coughing produce droplets of different sizes, represented by a bimodal distribution of 'fine' and 'coarse' droplets. A cough simulator is hence designed to generate transient sprays with such bimodal characteristics. It consists of a pressurized gas tank, a nebulizer and an ejector, connected in series, which are controlled by computerized solenoid valves. The bimodal droplet size distribution is characterized for the coarse droplets and fine droplets, by fibrous collection and laser diffraction, respectively. The measured size distributions of coarse and fine droplets are reasonably represented by the Rosin-Rammler and log-normal distributions in probability density function, which leads to a bimodal distribution. To assess the hydrodynamic consequences of coughing including droplet vaporization and polydispersion, a Lagrangian model of droplet trajectories is established, with its ambient flow field predetermined from a computational fluid dynamics simulation.
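
    As a rough illustration of the bimodal model described above, droplet diameters can be drawn from a mixture of a log-normal "fine" mode and a Rosin-Rammler (Weibull) "coarse" mode; all parameter values below are hypothetical placeholders, not the fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
fine = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)   # fine mode, ~2 um median (assumed)
coarse = 80.0 * rng.weibull(2.5, size=n)                    # Rosin-Rammler == Weibull form (assumed)
droplets_um = np.where(rng.random(n) < 0.7, fine, coarse)   # 70/30 fine/coarse split (assumed)
print(np.median(fine), np.median(coarse), np.median(droplets_um))
```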

  18. Ligament Mediated Fragmentation of Viscoelastic Liquids

    NASA Astrophysics Data System (ADS)

    Keshavarz, Bavand; Houze, Eric C.; Moore, John R.; Koerner, Michael R.; McKinley, Gareth H.

    2016-10-01

    The breakup and atomization of complex fluids can be markedly different than the analogous processes in a simple Newtonian fluid. Atomization of paint, combustion of fuels containing antimisting agents, as well as physiological processes such as sneezing are common examples in which the atomized liquid contains synthetic or biological macromolecules that result in viscoelastic fluid characteristics. Here, we investigate the ligament-mediated fragmentation dynamics of viscoelastic fluids in three different canonical flows. The size distributions measured in each viscoelastic fragmentation process show a systematic broadening from the Newtonian solvent. In each case, the droplet sizes are well described by Gamma distributions which correspond to a fragmentation-coalescence scenario. We use a prototypical axial step strain experiment together with high-speed video imaging to show that this broadening results from the pronounced change in the corrugated shape of viscoelastic ligaments as they separate from the liquid core. These corrugations saturate in amplitude and the measured distributions for viscoelastic liquids in each process are given by a universal probability density function, corresponding to a Gamma distribution with n_min = 4. The breadth of this size distribution for viscoelastic filaments is shown to be constrained by a geometrical limit which cannot be exceeded in ligament-mediated fragmentation phenomena.
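
    The Gamma distribution quoted above can be written for the normalised droplet size x = d/<d> with shape parameter n; a brief sketch using the paper's n_min = 4 and a scale chosen so that x has unit mean (the normalisation is our assumption):

```python
from scipy import stats

n = 4                                  # the paper's n_min
g = stats.gamma(a=n, scale=1.0 / n)    # normalised droplet size x = d/<d>
print(g.mean(), g.std())               # mean 1.0, relative width 1/sqrt(n) = 0.5
```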

  19. Multimodal Estimation of Distribution Algorithms.

    PubMed

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches with probabilities determined self-adaptively according to fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.

  20. Ligament Mediated Fragmentation of Viscoelastic Liquids.

    PubMed

    Keshavarz, Bavand; Houze, Eric C; Moore, John R; Koerner, Michael R; McKinley, Gareth H

    2016-10-07

    The breakup and atomization of complex fluids can be markedly different than the analogous processes in a simple Newtonian fluid. Atomization of paint, combustion of fuels containing antimisting agents, as well as physiological processes such as sneezing are common examples in which the atomized liquid contains synthetic or biological macromolecules that result in viscoelastic fluid characteristics. Here, we investigate the ligament-mediated fragmentation dynamics of viscoelastic fluids in three different canonical flows. The size distributions measured in each viscoelastic fragmentation process show a systematic broadening from the Newtonian solvent. In each case, the droplet sizes are well described by Gamma distributions which correspond to a fragmentation-coalescence scenario. We use a prototypical axial step strain experiment together with high-speed video imaging to show that this broadening results from the pronounced change in the corrugated shape of viscoelastic ligaments as they separate from the liquid core. These corrugations saturate in amplitude and the measured distributions for viscoelastic liquids in each process are given by a universal probability density function, corresponding to a Gamma distribution with n_{min}=4. The breadth of this size distribution for viscoelastic filaments is shown to be constrained by a geometrical limit which cannot be exceeded in ligament-mediated fragmentation phenomena.

  1. Estimates of the Size Distribution of Meteoric Smoke Particles From Rocket-Borne Impact Probes

    NASA Astrophysics Data System (ADS)

    Antonsen, Tarjei; Havnes, Ove; Mann, Ingrid

    2017-11-01

    Ice particles populating noctilucent clouds and being responsible for polar mesospheric summer echoes exist around the mesopause in the altitude range from 80 to 90 km during polar summer. The particles are observed when temperatures around the mesopause reach a minimum, and it is presumed that they consist of water ice with inclusions of smaller mesospheric smoke particles (MSPs). This work provides estimates of the mean size distribution of MSPs through analysis of collision fragments of the ice particles populating the mesospheric dust layers. We have analyzed data from two triplets of mechanically identical rocket probes, MUltiple Dust Detector (MUDD), which are Faraday bucket detectors with impact grids that partly fragment incoming ice particles. The MUDD probes were launched from Andøya Space Center (69°17'N, 16°1'E) on two payloads during the MAXIDUSTY campaign on 30 June and 8 July 2016, respectively. Our analysis shows that it is unlikely that ice particles produce significant current to the detector, and that MSPs dominate the recorded current. The size distributions obtained from these currents, which reflect the MSP sizes, are described by inverse power laws with exponents of k ≈ [3.3 ± 0.7, 3.7 ± 0.5] and k ≈ [3.6 ± 0.8, 4.4 ± 0.3] for the respective flights. We derived two k values for each flight depending on whether the charging probability is proportional to the area or the volume of the fragments. We also confirm that MSPs are probably abundant inside mesospheric ice particles larger than a few nanometers, and the volume filling factor can be a few percent for reasonable assumptions of particle properties.

  2. Statistical characterization of a large geochemical database and effect of sample size

    USGS Publications Warehouse

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of the probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
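
    The point that large samples cause formal tests to reject normality or log-normality can be illustrated with a small synthetic experiment; the contaminated log-normal population below is a stand-in, not NGS data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# A log-normal population with a small second subpopulation, standing in for one element.
pop = np.concatenate([rng.lognormal(3.0, 0.8, 15_000), rng.lognormal(5.0, 0.4, 1_500)])

for n in (100, 500, 5_000, 15_000):
    sample = rng.choice(pop, size=n, replace=False)
    p = stats.normaltest(np.log(sample)).pvalue   # D'Agostino K^2 test applied to log values
    print(f"n={n:>6}  p={p:.3g}")                 # p-values collapse as n grows
```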

  3. Multiscaling properties of coastal waters particle size distribution from LISST in situ measurements

    NASA Astrophysics Data System (ADS)

    Pannimpullath Remanan, R.; Schmitt, F. G.; Loisel, H.; Mériaux, X.

    2013-12-01

    A Eulerian high-frequency sampling of particle size distribution (PSD) is performed at 1 Hz during 5 tidal cycles (65 hours) in a coastal environment of the eastern English Channel. The particle data are recorded using a LISST-100x type C (Laser In Situ Scattering and Transmissometry, Sequoia Scientific), recording volume concentrations of particles having diameters ranging from 2.5 to 500 μm in 32 size classes on a logarithmic scale. This enables the estimation at each time step (every second) of the probability density function of particle sizes. At every time step, the pdf of the PSD is hyperbolic. We can thus estimate PSD slope time series. Power spectral analysis shows that the mean diameter of the suspended particles displays scaling at high frequencies (from 1 s to 1000 s). The scaling properties of particle sizes are studied by computing the moment function from the pdf of the size distribution. Moment functions at many different time scales (from 1 s to 1000 s) are computed and their scaling properties considered. The Shannon entropy at each time scale is also estimated and related to other parameters. The multiscaling properties of the turbidity (coefficient cp computed from the LISST) are also considered at the same time scales, using Empirical Mode Decomposition.

  4. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    NASA Astrophysics Data System (ADS)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r onto an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane and generates the peak ground motion. The appropriate ground motion prediction equations (GMPE) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with the centroid fixed at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. There is currently a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models that separate geometrical and propagation effects.

  5. Anomalous yet Brownian.

    PubMed

    Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve

    2009-09-08

    We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.

  6. Stochastic Analysis of Wind Energy for Wind Pump Irrigation in Coastal Andhra Pradesh, India

    NASA Astrophysics Data System (ADS)

    Raju, M. M.; Kumar, A.; Bisht, D.; Rao, D. B.

    2014-09-01

    The rapid escalation in the prices of oil and gas, as well as the increasing demand for energy, has attracted the attention of scientists and researchers to explore the possibility of generating and utilizing alternative and renewable wind energy in the long coastal belt of India, which has considerable wind energy resources. A detailed analysis of wind potential is a prerequisite to harvesting wind energy resources efficiently. Keeping this in view, the present study analyzed the wind energy potential to assess the feasibility of a wind-pump-operated irrigation system in the coastal region of Andhra Pradesh, India, where high groundwater table conditions are available. The wind speed data were analyzed stochastically and tested for fit to a probability distribution that describes the wind energy potential in the region. The normal and Weibull probability distributions were tested, and on the basis of the chi-square test, the Weibull distribution gave better results. Hence, it was concluded that the Weibull probability distribution may be used to stochastically describe the annual wind speed data of coastal Andhra Pradesh with better accuracy. The sizing of the complete irrigation system was then determined with mass curve analysis to satisfy various daily irrigation demands at different risk levels.
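
    A minimal sketch of the kind of comparison described above, fitting Weibull and normal distributions to synthetic wind speeds and scoring both with a chi-square statistic; the data and bin choices are placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
wind = 6.0 * rng.weibull(2.0, 1_000)             # synthetic hourly wind speeds, m/s

k, _, c = stats.weibull_min.fit(wind, floc=0)    # fitted Weibull shape and scale
mu, sigma = stats.norm.fit(wind)                 # fitted normal parameters

edges = np.linspace(0.0, wind.max() + 1e-9, 11)  # 10 equal-width bins
obs, _ = np.histogram(wind, edges)

def chi2(dist):
    expected = len(wind) * np.diff(dist.cdf(edges))
    expected *= obs.sum() / expected.sum()       # renormalise so the totals match
    return stats.chisquare(obs, expected, ddof=2).statistic

print(chi2(stats.weibull_min(k, 0, c)), chi2(stats.norm(mu, sigma)))  # Weibull scores lower
```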

  7. The role of community structure on the nature of explosive synchronization.

    PubMed

    Lotfi, Nastaran; Rodrigues, Francisco A; Darooneh, Amir Hossein

    2018-03-01

    In this paper, we analyze explosive synchronization in networks with a community structure. The results of our study indicate that the mesoscopic structure of the networks can affect the synchronization of coupled oscillators. By varying three parameters, the degree probability distribution exponent, the community size probability distribution exponent, and the mixing parameter, we can obtain a fast or slow phase transition. Moreover, in some cases communities may be synchronized internally but not with other communities, and vice versa. We also show that there is a limit in these mesoscopic structures beyond which the second-order phase transition is suppressed and explosive synchronization results. This could be considered a tuning parameter that changes the transition of the system from second order to first order.

  8. An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems

    PubMed Central

    Dawson, Kevin J.; Belkhir, Khalid

    2009-01-01

    Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306

  9. Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.

    PubMed

    Zagorecki, Adam; Łupińska-Dubicka, Anna; Voortman, Mark; Druzdzel, Marek J

    2016-03-01

    A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.

  10. Methods for obtaining true particle size distributions from cross section measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lord, Kristina Alyse

    2013-01-01

    Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical shaped objects, as noted by Wicksell [1]. This is the case because the section very rarely passes through the center of solid spherical shaped objects randomly dispersed throughout a material. The sizes of features observed on random sections are inversely related to the distance of the center of the solid object from the section [1]. Second, on a plane section through the solid material, larger sized features are more frequently observed than smaller ones due to the larger probability for a section to come into contact with the larger sized portion of the spheres than the smaller sized portion. As a result, it is necessary to find a method that takes into account these reasons for inaccurate particle size measurements, while providing a correction factor for accurately determining true particle size measurements. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
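
    Wicksell's first point, that random sections systematically understate sphere size, is easy to reproduce: for a sphere of radius R cut by a random plane, the distance from the centre to the plane is uniform on [0, R], so the expected section radius is pi*R/4. A short Monte Carlo check (monodisperse spheres only, which sets aside the second, sampling-bias effect):

```python
import numpy as np

rng = np.random.default_rng(3)
R = 1.0                               # true radius of the (monodisperse) spheres
d = rng.uniform(0.0, R, 100_000)      # distance from a random cut plane to the sphere centre
r = np.sqrt(R**2 - d**2)              # radius of the circle seen on the section
print(r.mean())                       # ~pi/4 * R ~ 0.785: sections understate the true size
```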

  11. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of extreme mean application.

  12. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
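
    One way to sketch the two-stage idea (not the paper's exact recipe) is to draw plausible input-distribution parameters from their finite-sample uncertainty in an outer loop and propagate draws from those distributions in an inner loop; the measurement model y = x1 / x2 and all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
x1_obs = rng.normal(10.0, 0.5, size=8)   # small calibration sample for input x1 (assumed)
x2_obs = rng.normal(2.0, 0.1, size=8)    # small calibration sample for input x2 (assumed)

def propagate(obs, rng, n_inner):
    """Stage 1: draw a plausible (mu, sigma) for the input; stage 2: draw inputs from it."""
    n, xbar, s = len(obs), obs.mean(), obs.std(ddof=1)
    sigma = s * np.sqrt((n - 1) / rng.chisquare(n - 1))   # scaled inverse-chi-square draw
    mu = rng.normal(xbar, sigma / np.sqrt(n))
    return rng.normal(mu, sigma, size=n_inner)

y = np.concatenate([propagate(x1_obs, rng, 1_000) / propagate(x2_obs, rng, 1_000)
                    for _ in range(200)])
print(np.percentile(y, [2.5, 97.5]))     # interval widened by the parameter uncertainty
```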

  13. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals of its quantiles, which indicate the prediction accuracy of the distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of the quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods for estimating the confidence intervals in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are few differences in the estimated quantiles between ML and PWM, while MOM shows distinct differences.

  14. Universal rule for the symmetric division of plant cells

    PubMed Central

    Besson, Sébastien; Dumais, Jacques

    2011-01-01

    The division of eukaryotic cells involves the assembly of complex cytoskeletal structures to exert the forces required for chromosome segregation and cytokinesis. In plants, empirical evidence suggests that tensional forces within the cytoskeleton cause cells to divide along the plane that minimizes the surface area of the cell plate (Errera’s rule) while creating daughter cells of equal size. However, exceptions to Errera’s rule cast doubt on whether a broadly applicable rule can be formulated for plant cell division. Here, we show that the selection of the plane of division involves a competition between alternative configurations whose geometries represent local area minima. We find that the probability of observing a particular division configuration increases inversely with its relative area according to an exponential probability distribution known as the Gibbs measure. Moreover, a comparison across land plants and their most recent algal ancestors confirms that the probability distribution is widely conserved and independent of cell shape and size. Using a maximum entropy formulation, we show that this empirical division rule is predicted by the dynamics of the tense cytoskeletal elements that lead to the positioning of the preprophase band. Based on the fact that the division plane is selected from the sole interaction of the cytoskeleton with cell shape, we posit that the new rule represents the default mechanism for plant cell division when internal or external cues are absent. PMID:21383128
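
    The Gibbs-measure rule described above can be sketched as an exponential weighting of the candidate division planes by their relative areas; the normalisation of the areas and the exponent value below are placeholders, not the fitted constants from the paper:

```python
import numpy as np

def division_probabilities(areas, beta):
    """Gibbs-measure weighting: division planes with smaller area are exponentially favoured."""
    a = np.asarray(areas, dtype=float)
    w = np.exp(-beta * a / a.mean())     # placeholder normalisation of the areas
    return w / w.sum()

# Three hypothetical candidate planes whose areas are local minima.
print(division_probabilities([1.00, 1.15, 1.40], beta=4.0))
```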

  15. Changes in the frequency distribution of energy deposited in short pathlengths as a function of energy degradation of the primary beam.

    NASA Technical Reports Server (NTRS)

    Baily, N. A.; Steigerwalt, J. E.; Hilbert, J. W.

    1972-01-01

    The frequency distributions of event size in the deposition of energy over small pathlengths have been measured after penetration of 44.3 MeV protons through various thicknesses of tissue-equivalent material. Results show that particle energy straggling of an initially monoenergetic proton beam after passage through an absorber causes the frequency distributions of energy deposited in short pathlengths of low atomic number materials to remain broad. In all cases investigated, the ratio of the most probable to the average energy losses has been significantly less than unity.

  16. Acid Hydrolysis and Molecular Density of Phytoglycogen and Liver Glycogen Helps Understand the Bonding in Glycogen α (Composite) Particles

    PubMed Central

    Powell, Prudence O.; Sullivan, Mitchell A.; Sheehy, Joshua J.; Schulz, Benjamin L.; Warren, Frederick J.; Gilbert, Robert G.

    2015-01-01

    Phytoglycogen (from certain mutant plants) and animal glycogen are highly branched glucose polymers with similarities in structural features and molecular size range. Both appear to form composite α particles from smaller β particles. The molecular size distribution of liver glycogen is bimodal, with distinct α and β components, while that of phytoglycogen is monomodal. This study aims to enhance our understanding of the nature of the link between liver-glycogen β particles resulting in the formation of large α particles. It examines the time evolution of the size distribution of these molecules during acid hydrolysis, and the size dependence of the molecular density of both glucans. The monomodal distribution of phytoglycogen decreases uniformly in time with hydrolysis, while with glycogen, the large particles degrade significantly more quickly. The size dependence of the molecular density shows qualitatively different shapes for these two types of molecules. The data, combined with a quantitative model for the evolution of the distribution during degradation, suggest that the bonding of β particles into α particles differs between phytoglycogen and liver glycogen: a glycosidic linkage is most likely for phytoglycogen, and a covalent or strong non-covalent linkage, most probably involving a protein, for liver glycogen. This finding is of importance for diabetes, where α-particle structure is impaired. PMID:25799321

  17. Large-Scale Weibull Analysis of H-451 Nuclear- Grade Graphite Specimen Rupture Data

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Walker, Andrew; Baker, Eric H.; Murthy, Pappu L.; Bratton, Robert L.

    2012-01-01

    A Weibull analysis was performed of the strength distribution and size effects for 2000 specimens of H-451 nuclear-grade graphite. The data, generated elsewhere, measured the tensile and four-point-flexure room-temperature rupture strength of specimens excised from a single extruded graphite log. Strength variation was compared with specimen location, size, and orientation relative to the parent body. In our study, data were progressively and extensively pooled into larger data sets to discriminate overall trends from local variations and to investigate the strength distribution. The CARES/Life and WeibPar codes were used to investigate issues regarding the size effect, Weibull parameter consistency, and nonlinear stress-strain response. Overall, the Weibull distribution described the behavior of the pooled data very well. However, the issue regarding the smaller-than-expected size effect remained. This exercise illustrated that a conservative approach using a two-parameter Weibull distribution is best for designing graphite components with low probability of failure for the in-core structures in the proposed Generation IV (Gen IV) high-temperature gas-cooled nuclear reactors. This exercise also demonstrated the continuing need to better understand the mechanisms driving stochastic strength response. Extensive appendixes are provided with this report to show all aspects of the rupture data and analytical results.
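
    The weakest-link size effect at issue above follows from the two-parameter Weibull model: at equal failure probability, the characteristic strength scales with specimen volume as sigma_2 = sigma_1 * (V_1/V_2)^(1/m). A one-line sketch with hypothetical numbers:

```python
import math

def scaled_strength(sigma_ref: float, volume_ratio: float, m: float) -> float:
    """Weakest-link (two-parameter Weibull) size effect on characteristic strength."""
    return sigma_ref * volume_ratio ** (-1.0 / m)

# Hypothetical values: Weibull modulus m = 10, a specimen 8x the reference volume.
print(scaled_strength(20.0, 8.0, 10.0))   # ~16.2: the larger specimen is predicted weaker
```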

  18. Effect of reaction-step-size noise on the switching dynamics of stochastic populations

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael

    2016-05-01

    In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.

  19. Andean Condor (Vultur gryphus) in Ecuador: Geographic Distribution, Population Size and Extinction Risk.

    PubMed

    Naveda-Rodríguez, Adrián; Vargas, Félix Hernán; Kohn, Sebastián; Zapata-Ríos, Galo

    2016-01-01

    The Andean Condor (Vultur gryphus) in Ecuador is classified as Critically Endangered. Before 2015, standardized and systematic estimates of geographic distribution, population size and structure were not available for this species, hampering the assessment of its current status and hindering the design and implementation of effective conservation actions. In this study, we performed the first quantitative assessment of geographic distribution, population size and population viability of Andean Condor in Ecuador. We used a methodological approach that included an ecological niche model to study geographic distribution, a simultaneous survey of 70 roosting sites to estimate population size and a population viability analysis (PVA) for the next 100 years. Geographic distribution in the form of extent of occurrence was 49 725 km2. During a two-day census, 93 Andean Condors were recorded and a population of 94 to 102 individuals was estimated. In this population, the adult-to-immature ratio was 1:0.5. In the modeled PVA scenarios, the probability of extinction, mean time to extinction and minimum population size varied from zero to 100%, 63 years and 193 individuals, respectively. Habitat loss is the greatest threat to the conservation of Andean Condor populations in Ecuador. Population size reduction in scenarios that included habitat loss began within the first 15 years of this threat. Population reinforcement had no effect on the recovery of Andean Condor populations given the current status of the species in Ecuador. The population size estimate presented in this study is lower than those reported previously in other countries where the species occurs. The inferences derived from the population viability analysis have implications for Condor management in Ecuador. This study highlights the need to redirect efforts from captive breeding and population reinforcement to habitat conservation.

  20. Andean Condor (Vultur gryphus) in Ecuador: Geographic Distribution, Population Size and Extinction Risk

    PubMed Central

    Naveda-Rodríguez, Adrián; Vargas, Félix Hernán; Kohn, Sebastián; Zapata-Ríos, Galo

    2016-01-01

    The Andean Condor (Vultur gryphus) in Ecuador is classified as Critically Endangered. Before 2015, standardized and systematic estimates of geographic distribution, population size and structure were not available for this species, hampering the assessment of its current status and hindering the design and implementation of effective conservation actions. In this study, we performed the first quantitative assessment of geographic distribution, population size and population viability of Andean Condor in Ecuador. We used a methodological approach that included an ecological niche model to study geographic distribution, a simultaneous survey of 70 roosting sites to estimate population size and a population viability analysis (PVA) for the next 100 years. Geographic distribution in the form of extent of occurrence was 49 725 km2. During a two-day census, 93 Andean Condors were recorded and a population of 94 to 102 individuals was estimated. In this population, the adult-to-immature ratio was 1:0.5. In the modeled PVA scenarios, the probability of extinction, mean time to extinction and minimum population size varied from zero to 100%, 63 years and 193 individuals, respectively. Habitat loss is the greatest threat to the conservation of Andean Condor populations in Ecuador. Population size reduction in scenarios that included habitat loss began within the first 15 years of this threat. Population reinforcement had no effect on the recovery of Andean Condor populations given the current status of the species in Ecuador. The population size estimate presented in this study is lower than those reported previously in other countries where the species occurs. The inferences derived from the population viability analysis have implications for Condor management in Ecuador. This study highlights the need to redirect efforts from captive breeding and population reinforcement to habitat conservation. PMID:26986004

  1. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    NASA Astrophysics Data System (ADS)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have addressed this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. To provide a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparison with the size-dependent mobilities predicted by the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a critical particle size of incipient motion modified to account for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
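
    Under the lognormal assumption stated above, the fraction of the bed PSD finer than the threshold particle size of incipient motion is simply a lognormal CDF evaluation; the grain-size numbers below are hypothetical:

```python
import numpy as np
from scipy import stats

median_mm, gsd, d_threshold = 0.5, 2.0, 0.8          # hypothetical bed PSD and threshold size
bed_psd = stats.lognorm(s=np.log(gsd), scale=median_mm)

frac_finer = bed_psd.cdf(d_threshold)                # fraction of grains finer than the threshold
print(frac_finer)                                    # ~0.75 for these assumed parameters
```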

  2. Application of a Probabilistic Sizing Methodology for Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Rancurel, Michael; Behar-Lafenetre, Stephanie; Cornillon, Laurence; Leroy, Francois-Henri; Coe, Graham; Laine, Benoit

    2012-07-01

    Ceramics are increasingly used in the space industry to take advantage of their stability and high specific stiffness. Their brittle behaviour often leads designers to size them by increasing the safety factors applied to the maximum stresses, which results in oversized structures. This is inconsistent with a major driver in space architecture, the mass criterion. This paper presents a methodology for sizing ceramic structures based on their failure probability. From failure tests on samples, the Weibull law that characterizes the strength distribution of the material is obtained. A-value (Q0.0195%) and B-value (Q0.195%) are then assessed to take into account the limited number of samples. A knocked-down Weibull law that interpolates the A- and B-values is also obtained. From these two laws, a most-likely and a knocked-down prediction of failure probability are computed for complex ceramic structures. The application of this methodology and its validation by test are reported in the paper.
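
    For a uniformly stressed part, the two-parameter Weibull law used in this kind of sizing gives a closed-form failure probability, P_f = 1 - exp[-(sigma/sigma_0)^m]; a minimal sketch with hypothetical strength parameters:

```python
import math

def failure_probability(stress: float, sigma0: float, m: float) -> float:
    """Two-parameter Weibull failure probability of a uniformly stressed ceramic part."""
    return 1.0 - math.exp(-((stress / sigma0) ** m))

# Hypothetical values: characteristic strength 120 MPa, Weibull modulus 12, applied stress 60 MPa.
print(failure_probability(60.0, 120.0, 12.0))   # ~2.4e-4
```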

  3. Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.

    PubMed

    Chevallier, Maguelonne; Krauth, Werner

    2007-11-01

    We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L^3 the sum of the cycle probabilities of length k > L^2 equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T = 0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.

  4. Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states

    NASA Astrophysics Data System (ADS)

    de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.

    2015-12-01

    Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.

  5. Time-dependent solutions for a stochastic model of gene expression with molecule production in the form of a compound Poisson process.

    PubMed

    Jędrak, Jakub; Ochab-Marcinek, Anna

    2016-09-01

    We study a stochastic model of gene expression, in which protein production has a form of random bursts whose size distribution is arbitrary, whereas protein decay is a first-order reaction. We find exact analytical expressions for the time evolution of the cumulant-generating function for the most general case when both the burst size probability distribution and the model parameters depend on time in an arbitrary (e.g., oscillatory) manner, and for arbitrary initial conditions. We show that in the case of periodic external activation and constant protein degradation rate, the response of the gene is analogous to the resistor-capacitor low-pass filter, where slow oscillations of the external driving have a greater effect on gene expression than the fast ones. We also demonstrate that the nth cumulant of the protein number distribution depends on the nth moment of the burst size distribution. We use these results to show that different measures of noise (coefficient of variation, Fano factor, fractional change of variance) may vary in time in a different manner. Therefore, any biological hypothesis of evolutionary optimization based on the nonmonotonic dependence of a chosen measure of noise on time must justify why it assumes that biological evolution quantifies noise in that particular way. Finally, we show that not only for exponentially distributed burst sizes but also for a wider class of burst size distributions (e.g., Dirac delta and gamma) the control of gene expression level by burst frequency modulation gives rise to proportional scaling of variance of the protein number distribution to its mean, whereas the control by amplitude modulation implies proportionality of protein number variance to the mean squared.

  6. Electron impact ionization of size selected hydrogen clusters (H2)N: ion fragment and neutral size distributions.

    PubMed

    Kornilov, Oleg; Toennies, J Peter

    2008-05-21

    Clusters consisting of normal H2 molecules, produced in a free jet expansion, are size selected by diffraction from a transmission nanograting prior to electron impact ionization. For each neutral cluster (H2)N (N = 2-40), the relative intensities of the ion fragments Hn+ are measured with a mass spectrometer. H3+ is found to be the most abundant fragment up to N = 17. With a further increase in N, the abundances of H3+, H5+, H7+, and H9+ first increase and, after passing through a maximum, approach each other. At N = 40, they are about the same and more than a factor of 2 and 3 larger than for H11+ and H13+, respectively. For a given neutral cluster size, the intensities of the ion fragments follow a Poisson distribution. The fragmentation probabilities are used to determine the neutral cluster size distribution produced in the expansion at a source temperature of 30.1 K and a source pressure of 1.50 bar. The distribution shows no clear evidence of a magic number N = 13 as predicted by theory and found in experiments with pure para-H2 clusters. The ion fragment distributions are also used to extract information on the internal energy distribution of the H3+ ions produced in the reaction H2+ + H2 --> H3+ + H, which is initiated upon ionization of the cluster. The internal energy is assumed to be rapidly equilibrated and to determine the number of molecules subsequently evaporated. The internal energy distribution found in this way is in good agreement with data obtained in an earlier independent merged beam scattering experiment.

  7. A Financial Market Model Incorporating Herd Behaviour

    PubMed Central

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents’ accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents’ accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market price of an equity index option. PMID:27007236

  8. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
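
    The step-size-1 member of the family discussed above is the familiar EM iteration for a normal mixture; a compact one-dimensional, two-component sketch (synthetic data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

def em_step(x, w, mu, sd):
    # E-step: responsibility of component 0 for each observation.
    p0 = w * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
    p1 = (1.0 - w) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
    r = p0 / (p0 + p1)
    # M-step: weighted updates (this is the "step-size 1" member of the family).
    mu_new = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sd_new = np.sqrt(np.array([np.average((x - mu_new[0]) ** 2, weights=r),
                               np.average((x - mu_new[1]) ** 2, weights=1 - r)]))
    return r.mean(), mu_new, sd_new

w, mu, sd = 0.5, np.array([0.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    w, mu, sd = em_step(x, w, mu, sd)
print(w, mu, sd)   # approaches weight ~0.3, means ~(-2, 3), standard deviations ~1
```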

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  10. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
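
    Because a mixture of Beta priors is conjugate to the binomial likelihood, the posterior probability that the response rate exceeds a target is a weighted sum of Beta tail probabilities, with the weights updated through beta-binomial marginal likelihoods. A short sketch with hypothetical priors and trial results (not the paper's design values):

```python
from scipy import stats

priors = [(0.5, 2, 8), (0.5, 8, 2)]   # (weight, a, b): pessimistic and optimistic clinicians
target = 0.5                          # response rate the drug must exceed (assumed)
x, n = 14, 25                         # hypothetical outcome: 14 responders in 25 patients

marg = [w * stats.betabinom(n, a, b).pmf(x) for w, a, b in priors]   # marginal likelihoods
post_w = [m / sum(marg) for m in marg]                               # updated mixture weights
prob = sum(pw * stats.beta(a + x, b + n - x).sf(target)
           for pw, (_, a, b) in zip(post_w, priors))
print(prob)   # posterior probability that the true response rate exceeds the target
```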

  11. Mechanical analysis of the dry stone walls built by the Incas

    NASA Astrophysics Data System (ADS)

    Castro, Jaime; Vallejo, Luis E.; Estrada, Nicolas

    2017-06-01

    In this paper, the retaining walls in the agricultural terraces built by the Incas are analyzed from a mechanical point of view. To do so, ten different walls from the Lower Agricultural Sector of Machu Picchu, Perú, were selected using images from Google Street View and Google Earth Pro. These walls were then digitized and their mechanical stability was evaluated. Firstly, it was found that these retaining walls are characterized by two distinctive features: disorder and a block size distribution with a large size span, i.e., the particle size varies from blocks that can be carried by one person to large blocks weighing several tons. Secondly, it was found that, thanks to the large span of the block size distribution, the factor of safety of the Inca retaining walls is remarkably close to those recommended in modern geotechnical design standards. This suggests that these structures were not only functional but also highly optimized, probably as a result of a careful trial and error procedure.

  12. Orbital debris and meteoroids: Results from retrieved spacecraft surfaces

    NASA Astrophysics Data System (ADS)

    Mandeville, J. C.

    1993-08-01

    Near-Earth space contains natural and man-made particles whose size distribution ranges from submicron-sized particles to cm-sized objects. This environment poses a grave threat to space missions, particularly future manned or long-duration missions. Several experiments devoted to the study of this environment have recently been retrieved from space. Among them, several were located on the NASA Long Duration Exposure Facility (LDEF) and on the Russian MIR Space Station. Evaluation of hypervelocity impact features gives valuable information on the size distribution of small dust particles present in low Earth orbit. Chemical identification of projectile remnants is possible in many instances, thus allowing a discrimination between extraterrestrial particles and man-made orbital debris. A preliminary comparison of flight data with current modeling of meteoroids and space debris shows fair agreement. However, impacts of particles identified as space debris on the trailing side of LDEF, not predicted by the models, could be the result of space debris in highly eccentric orbits, probably associated with GTO objects.

  13. Behavior of suspended particles in the Changjiang Estuary: Size distribution and trace metal contamination.

    PubMed

    Yao, Qingzhen; Wang, Xiaojing; Jian, Huimin; Chen, Hongtao; Yu, Zhigang

    2016-02-15

    Suspended particulate matter (SPM) samples were collected along a salinity gradient in the Changjiang Estuary in June 2011. A custom-built water elutriation apparatus was used to separate the suspended sediments into five size fractions. The results indicated that Cr and Pb originated from natural weathering processes, whereas Cu, Zn, and Cd originated from other sources. For most trace metals, contents increased with decreasing particle size. The Fe/Mn and organic matter contents were confirmed to play an important role in elevating heavy metal contents. The Cu, Pb, Zn, and Cd contents varied significantly with increasing salinity in the medium-low salinity region, thus indicating the release of Cu, Pb, Zn, and Cd particles. Thus, the transfer of polluted fine particles into the open sea is probably accompanied by release of pollutants into the dissolved compartment, thereby amplifying the potential harmful effects to marine organisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Gravity and count probabilities in an expanding universe

    NASA Technical Reports Server (NTRS)

    Bouchet, Francois R.; Hernquist, Lars

    1992-01-01

    The time evolution of nonlinear clustering on large scales in cold dark matter, hot dark matter, and white noise models of the universe is investigated using N-body simulations performed with a tree code. Count probabilities in cubic cells are determined as functions of the cell size and the clustering state (redshift), and comparisons are made with various theoretical models. We isolate the features that appear to be the result of gravitational instability, those that depend on the initial conditions, and those that are likely a consequence of numerical limitations. More specifically, we study the development of skewness, kurtosis, and the fifth moment in relation to variance, the dependence of the void probability on time as well as on sparseness of sampling, and the overall shape of the count probability distribution. Implications of our results for theoretical and observational studies are discussed.
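
    As a toy illustration of count-in-cells statistics (not the authors' tree-code analysis), the sketch below bins a random 3D point set into cubic cells of a given size, tabulates the count probability distribution P(N), and reads off the void probability P(0); the box size and the uniform point process are placeholder assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      box = 100.0                                    # box side (arbitrary units)
      pts = rng.uniform(0.0, box, size=(20000, 3))   # placeholder "particles"

      def count_probabilities(points, box, cell):
          """Return P(N) for counts in cubic cells of side `cell`."""
          nbins = int(box // cell)
          edges = np.linspace(0.0, nbins * cell, nbins + 1)
          counts, _ = np.histogramdd(points, bins=(edges, edges, edges))
          counts = counts.ravel().astype(int)
          return np.bincount(counts) / counts.size   # count probability distribution

      for cell in (5.0, 10.0, 20.0):
          pN = count_probabilities(pts, box, cell)
          print(f"cell={cell:5.1f}  P0(void)={pN[0]:.4f}  mean N={np.arange(pN.size) @ pN:.2f}")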

  15. Modeling the distribution of colonial species to improve estimation of plankton concentration in ballast water

    NASA Astrophysics Data System (ADS)

    Rajakaruna, Harshana; VandenByllaardt, Julie; Kydd, Jocelyn; Bailey, Sarah

    2018-03-01

    The International Maritime Organization (IMO) has set limits on allowable plankton concentrations in ballast water discharge to minimize aquatic invasions globally. Previous guidance on ballast water sampling and compliance decision thresholds was based on the assumption that probability distributions of plankton are Poisson when spatially homogeneous, or negative binomial when heterogeneous. We propose a hierarchical probability model, which incorporates distributions at the level of particles (i.e., discrete individuals plus colonies per unit volume) and also within particles (i.e., individuals per particle), to estimate the average plankton concentration in ballast water. We examined the performance of the models using data for plankton in the size class ≥ 10 μm and < 50 μm, collected from five different depths of a ballast tank of a commercial ship in three independent surveys. We show that the data fit the negative binomial and the hierarchical probability models equally well, with both models performing better than the Poisson model at the scale of our sampling. The hierarchical probability model, which accounts for both the individuals and the colonies in a sample, reduces the uncertainty associated with the concentration estimation and improves the power to reject a ship's compliance when the ship does not truly comply with the standard. We show examples of how to test ballast water compliance using the above models.
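
    The hierarchical idea can be sketched as a random sum: the number of particles in a sample volume follows one count distribution and the number of individuals per particle follows another, so the total individual count is a compound distribution. The negative-binomial particle counts and shifted-geometric colony sizes below are illustrative assumptions for a sketch, not the distributions fitted in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_counts(n_samples, mean_particles=8.0, dispersion=2.0, mean_per_particle=1.5):
          """Random-sum sketch: particles ~ NegBinom, individuals per particle ~ geometric (>= 1)."""
          p = dispersion / (dispersion + mean_particles)          # numpy NegBinom parameterisation
          particles = rng.negative_binomial(dispersion, p, size=n_samples)
          totals = np.empty(n_samples, dtype=int)
          for i, k in enumerate(particles):
              if k == 0:
                  totals[i] = 0
              else:
                  # each particle holds at least one individual
                  totals[i] = rng.geometric(1.0 / mean_per_particle, size=k).sum()
          return totals

      counts = simulate_counts(5000)
      print(f"mean concentration estimate: {counts.mean():.2f} individuals per sample volume")
      print(f"variance/mean (overdispersion): {counts.var() / counts.mean():.2f}")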

  16. Heavy-tailed distribution of cyber-risks

    NASA Astrophysics Data System (ADS)

    Maillart, T.; Sornette, D.

    2010-06-01

    With the development of the Internet, new kinds of massive epidemics, distributed attacks, virtual conflicts and criminality have emerged. We present a study of some striking statistical properties of cyber-risks that quantify the distribution and time evolution of information risks on the Internet, in order to understand their mechanisms and create opportunities to mitigate, control, predict and insure them at a global scale. First, we report an exceptionally stable power-law tail distribution of personal identity losses per event, Pr(ID loss ≥ V) ~ 1/V^b, with b = 0.7 ± 0.1. This result is robust against a surprisingly strong non-stationary growth of ID losses culminating in July 2006, followed by a more stationary phase. Moreover, this distribution is identical for different types and sizes of targeted organizations. Since b < 1, the cumulative number of all losses over all events up to time t increases faster than linearly with time, as ≃ t^(1/b), suggesting that privacy, characterized by personal identities, is necessarily becoming more and more insecure. We also show the existence of a size effect, such that the largest possible ID losses per event grow faster than linearly, as ~ S^1.3, with the organization size S. The small value b ≃ 0.7 of the power-law distribution of ID losses is explained by the interplay between Zipf's law and the size effect. We also infer that compromised entities exhibit essentially the same probability of incurring a small or a large loss.
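
    A small sketch of how a tail exponent b in Pr(loss ≥ V) ~ V^(-b) can be estimated by maximum likelihood above a chosen threshold (the Hill estimator); the synthetic sample and thresholds are placeholders, not the breach data analyzed in the study.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic heavy-tailed "ID losses" drawn from a Pareto tail with exponent b_true
      b_true, v_min = 0.7, 1.0
      losses = v_min * (1.0 - rng.uniform(size=50000)) ** (-1.0 / b_true)

      def hill_exponent(x, threshold):
          """MLE of the tail exponent b for Pr(X >= v) ~ v**(-b) above `threshold`."""
          tail = x[x >= threshold]
          return tail.size / np.sum(np.log(tail / threshold))

      for thr in (1.0, 10.0, 100.0):
          print(f"threshold={thr:6.1f}  n_tail={np.sum(losses >= thr):6d}  "
                f"b_hat={hill_exponent(losses, thr):.3f}")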

  17. Greedy algorithms and Zipf laws

    NASA Astrophysics Data System (ADS)

    Moran, José; Bouchaud, Jean-Philippe

    2018-04-01

    We consider a simple model of firm/city/etc. growth based on a multi-item criterion: whenever entity B fares better than entity A on a subset of M items out of K, the agent originally in A moves to B. We solve the model analytically in the cases K = 1 and K → ∞. The resulting stationary distribution of sizes is generically a Zipf law provided M > K/2. When M ≤ K/2, no selection occurs and the size distribution remains thin-tailed. In the special case M = K, one needs to regularize the problem by introducing a small 'default' probability ϕ. We find that the stationary distribution has a power-law tail that becomes a Zipf law when ϕ → 0. The approach to the stationary state can also be characterized, with strong similarities to a simple 'aging' model considered by Barrat and Mézard.

  18. Possible ergodic-nonergodic regions in the quantum Sherrington-Kirkpatrick spin glass model and quantum annealing

    NASA Astrophysics Data System (ADS)

    Mukherjee, Sudip; Rajak, Atanu; Chakrabarti, Bikas K.

    2018-02-01

    We explore the behavior of the order parameter distribution of the quantum Sherrington-Kirkpatrick model in the spin glass phase using Monte Carlo technique for the effective Suzuki-Trotter Hamiltonian at finite temperatures and that at zero temperature obtained using the exact diagonalization method. Our numerical results indicate the existence of a low- but finite-temperature quantum-fluctuation-dominated ergodic region along with the classical fluctuation-dominated high-temperature nonergodic region in the spin glass phase of the model. In the ergodic region, the order parameter distribution gets narrower around the most probable value of the order parameter as the system size increases. In the other region, the Parisi order distribution function has nonvanishing value everywhere in the thermodynamic limit, indicating nonergodicity. We also show that the average annealing time for convergence (to a low-energy level of the model, within a small error range) becomes system size independent for annealing down through the (quantum-fluctuation-dominated) ergodic region. It becomes strongly system size dependent for annealing through the nonergodic region. Possible finite-size scaling-type behavior for the extent of the ergodic region is also addressed.

  19. Bayesian assessment of uncertainty in aerosol size distributions and index of refraction retrieved from multiwavelength lidar measurements.

    PubMed

    Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir

    2008-04-01

    We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced with the addition of suitable extinction measurements in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect and strengthens similar observations based on numerical regularization methods.

  20. Spacing distribution functions for 1D point island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto

    2011-03-01

    We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p^n_xy(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p^n_xy(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).

  1. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.

  2. Universality in survivor distributions: Characterizing the winners of competitive dynamics

    NASA Astrophysics Data System (ADS)

    Luck, J. M.; Mehta, A.

    2015-11-01

    We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.

  3. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
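
    A hedged sketch of the procedure as described: individual histograms from two size categories are pooled under the null hypothesis of no difference, resampled with replacement into two groups of the original sizes, summed into summary histograms, and the Euclidean distance between normalized summaries is compared with its bootstrap null distribution. The synthetic counts stand in for the cloud-object histograms.

      import numpy as np

      rng = np.random.default_rng(3)

      def summary(hists):
          """Sum individual histograms and normalize to a relative-frequency summary histogram."""
          s = hists.sum(axis=0).astype(float)
          return s / s.sum()

      def bootstrap_test(group_a, group_b, n_boot=2000):
          """Bootstrap p-value for the Euclidean distance between two summary histograms."""
          d_obs = np.linalg.norm(summary(group_a) - summary(group_b))
          pooled = np.vstack([group_a, group_b])
          n_a, n_b = group_a.shape[0], group_b.shape[0]
          count = 0
          for _ in range(n_boot):
              # resample individual histograms with replacement from the pooled set (null: no difference)
              res_a = pooled[rng.integers(0, pooled.shape[0], size=n_a)]
              res_b = pooled[rng.integers(0, pooled.shape[0], size=n_b)]
              count += np.linalg.norm(summary(res_a) - summary(res_b)) >= d_obs
          return d_obs, count / n_boot

      # Placeholder "individual histograms": counts in 10 bins for each cloud object in two size categories
      group_a = rng.poisson(lam=np.linspace(5, 50, 10), size=(40, 10))
      group_b = rng.poisson(lam=np.linspace(8, 47, 10), size=(60, 10))
      d, p = bootstrap_test(group_a, group_b)
      print(f"Euclidean distance = {d:.4f}, bootstrap p-value = {p:.3f}")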

  4. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets. II. Group comparisons

    USGS Publications Warehouse

    Antweiler, Ronald C.

    2015-01-01

    The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.
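
    For illustration only: the substitution treatment replaces each value below the detection limit with (√2/2)·DL and then applies an ordinary two-sample rank test. Here scipy's Mann-Whitney U test is used as a stand-in for the censoring-aware Generalized Wilcoxon (Gehan) test evaluated in the paper, and the lognormal samples and detection limit are invented.

      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(5)

      def censor(values, detection_limit):
          """Flag values below the detection limit as left-censored."""
          return values, values < detection_limit

      def substitute(values, censored, detection_limit):
          """Substitution treatment: replace censored observations with (sqrt(2)/2) * DL."""
          out = values.copy()
          out[censored] = (np.sqrt(2.0) / 2.0) * detection_limit
          return out

      dl = 1.0
      a = rng.lognormal(mean=0.2, sigma=0.8, size=50)
      b = rng.lognormal(mean=0.6, sigma=0.8, size=50)
      a_sub = substitute(*censor(a, dl), dl)
      b_sub = substitute(*censor(b, dl), dl)

      # Rank test on substituted data (Mann-Whitney U as a stand-in for Gehan-Wilcoxon)
      stat, p = mannwhitneyu(a_sub, b_sub, alternative="two-sided")
      print(f"censoring: {np.mean(a < dl):.0%} / {np.mean(b < dl):.0%},  U = {stat:.1f},  p = {p:.4f}")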

  5. Combined statistical analysis of landslide release and propagation

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative distribution function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. Therefore, we conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated, landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.
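
    A minimal sketch of the final combination, steps (5)-(6), under the simplifying assumption that pixel release probabilities within a release zone act independently (the paper instead quantifies the zone-size effect with empirical curves); all input values are placeholders.

      import numpy as np

      def zonal_release_probability(pixel_release_probs):
          """Probability that at least one pixel in the release zone releases,
          assuming independent pixels (stand-in for the paper's empirical curves)."""
          return 1.0 - np.prod(1.0 - np.asarray(pixel_release_probs))

      def combined_impact_probability(pixel_release_probs, impact_prob):
          """Step (6): multiply the zonal release probability by the impact probability."""
          return zonal_release_probability(pixel_release_probs) * impact_prob

      # Placeholder: a release zone of 200 pixels, each with a small statistical release probability,
      # and an impact probability at a downslope pixel derived from the angle-of-reach cdf
      release_zone = np.full(200, 0.002)
      impact_prob = 0.35
      print(f"zonal release probability: {zonal_release_probability(release_zone):.3f}")
      print(f"combined impact probability: {combined_impact_probability(release_zone, impact_prob):.3f}")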

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    August, Tyler M.; Wiegert, Paul A., E-mail: tx_august@laurentian.ca

    The size distribution of the asteroid belt is examined with 16956 main belt asteroids detected in data taken from the Canada-France-Hawaii Telescope Legacy Survey in two filters (g' and r'). The cumulative H (absolute magnitude) distribution is examined in both filters, and both match well to simple power laws down to H = 17, with slopes in rough agreement with those reported in the literature. This implies that disruptive collisions between asteroids are gravitationally dominated down to at least this size, and probably to sub-kilometer scales. The slopes of these distributions appear shallower in the outer belt than in the inner belt, and the g' distributions appear slightly steeper than the r'. The slope shallowing in the outer belt may reflect a real compositional difference: the inner asteroid belt has been suggested to consist mostly of stony and/or metallic S-type asteroids, whereas carbonaceous C-types are thought to be more prevalent further from the Sun. No waves are seen in the size distribution above H = 15. Since waves are expected to be produced at the transition from gravitationally-dominated to internal strength-dominated collisions, their absence here may imply that the transition occurs at sub-kilometer scales, much smaller than the H = 17 (diameter ≈ 1.6 km) cutoff of this study.

  7. Simple size-controlled synthesis of Au nanoparticles and their size-dependent catalytic activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suchomel, Petr; Kvitek, Libor; Prucek, Robert

    The controlled preparation of Au nanoparticles (NPs) in the size range of 6 to 22 nm is explored in this study. The Au NPs were prepared by the reduction of tetrachloroauric acid using maltose in the presence of the nonionic surfactant Tween 80 at various concentrations to control the size of the resulting Au NPs. With increasing concentration of Tween 80, a decrease in the size of the produced Au NPs was observed, along with a significant decrease in their size distribution. The size-dependent catalytic activity of the synthesized Au NPs was tested in the reduction of 4-nitrophenol with sodium borohydride, resulting in increasing catalytic activity with decreasing size of the prepared nanoparticles. The Eley-Rideal catalytic mechanism emerges as the more probable one, in contrast to the Langmuir-Hinshelwood mechanism reported for other noble metal nanocatalysts.

  8. Simple size-controlled synthesis of Au nanoparticles and their size-dependent catalytic activity

    DOE PAGES

    Suchomel, Petr; Kvitek, Libor; Prucek, Robert; ...

    2018-03-15

    The controlled preparation of Au nanoparticles (NPs) in the size range of 6 to 22 nm is explored in this study. The Au NPs were prepared by the reduction of tetrachloroauric acid using maltose in the presence of the nonionic surfactant Tween 80 at various concentrations to control the size of the resulting Au NPs. With increasing concentration of Tween 80, a decrease in the size of the produced Au NPs was observed, along with a significant decrease in their size distribution. The size-dependent catalytic activity of the synthesized Au NPs was tested in the reduction of 4-nitrophenol with sodium borohydride, resulting in increasing catalytic activity with decreasing size of the prepared nanoparticles. The Eley-Rideal catalytic mechanism emerges as the more probable one, in contrast to the Langmuir-Hinshelwood mechanism reported for other noble metal nanocatalysts.

  9. Modeling of Disordered Binary Alloys Under Thermal Forcing: Effect of Nanocrystallite Dissociation on Thermal Expansion of AuCu3

    NASA Astrophysics Data System (ADS)

    Kim, Y. W.; Cress, R. P.

    2016-11-01

    Disordered binary alloys are modeled as a randomly close-packed assembly of nanocrystallites intermixed with randomly positioned atoms, i.e., glassy-state matter. The nanocrystallite size distribution is measured in a simulated macroscopic medium in two dimensions. We have also defined, and measured, the degree of crystallinity as the probability of a particle being a member of nanocrystallites. Both the distribution function and the degree of crystallinity are found to be determined by alloy composition. When heated, the nanocrystallites become smaller in size due to increasing thermal fluctuation. We have modeled this phenomenon as a case of thermal dissociation by means of the law of mass action. The crystallite size distribution function is computed for AuCu3 as a function of temperature by solving some 12 000 coupled algebraic equations for the alloy. The results show that linear thermal expansion of the specimen has contributions from the temperature dependence of the degree of crystallinity, in addition to respective thermal expansions of the nanocrystallites and glassy-state matter.

  10. Assessing hail risk for a building portfolio by generating stochastic events

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Choffet, Marc; Demierre, Jonathan; Imhof, Markus; Jaboyedoff, Michel; Nguyen, Liliane; Voumard, Jérémie

    2015-04-01

    Among the natural hazards affecting buildings, hail is one of the most costly and is nowadays a major concern for building insurance companies. In Switzerland, several costly events were reported in recent years, among which the July 2011 event, which cost the Aargauer public insurance company (north-western Switzerland) around 125 million EUR. This study presents new developments in a stochastic model which aims at evaluating the risk for a building portfolio. Thanks to insurance and meteorological radar data of the 2011 Aargauer event, vulnerability curves are proposed by comparing the damage rate to the radar intensity (i.e. the maximum hailstone size reached during the event, deduced from the radar signal). From these data, vulnerability is defined by a two-step process. The first step defines the probability for a building to be affected (i.e. to claim damages), while the second, if the building is affected, attributes a damage rate to the building from a probability distribution specific to the intensity class. To assess the risk, stochastic events are then generated by summing a set of Gaussian functions with 6 random parameters (X and Y location, maximum hailstone size, standard deviation, eccentricity and orientation). The location of these functions is constrained by a general event shape and by the position of the previously defined functions of the same event. For each generated event, the total cost is calculated in order to obtain a distribution of event costs. The general event parameters (shape, size, …) as well as the distribution of the Gaussian parameters are inferred from two radar intensity maps, namely that of the aforementioned event and a second from an event which occurred in 2009. After a large number of simulations, the hailstone size distribution obtained in different regions is compared to the distribution inferred from pre-existing hazard maps, built from a larger set of radar data. The simulation parameters are then adjusted by trial and error in order to best reproduce the expected distributions. The value of the mean annual risk obtained using the model is also compared to the mean annual risk calculated directly from the hazard maps. According to the first results, the return period of an event inducing a total damage cost equal to or greater than 125 million EUR for the Aargauer insurance company would be around 10 to 40 years.
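
    The event generator can be sketched as follows: a synthetic hailstone-size field is built by summing 2D Gaussian functions with random location, amplitude (maximum hailstone size), standard deviation, eccentricity and orientation. The parameter ranges below are placeholders, not the distributions inferred from the radar maps.

      import numpy as np

      rng = np.random.default_rng(11)

      def gaussian_footprint(X, Y, x0, y0, amp, sigma, ecc, theta):
          """Anisotropic 2D Gaussian: amplitude `amp`, axes sigma and sigma*ecc, rotated by theta."""
          dx, dy = X - x0, Y - y0
          u = dx * np.cos(theta) + dy * np.sin(theta)
          v = -dx * np.sin(theta) + dy * np.cos(theta)
          return amp * np.exp(-0.5 * ((u / sigma) ** 2 + (v / (sigma * ecc)) ** 2))

      def stochastic_event(n_cells=20, domain=50.0, grid=200):
          """Synthetic event: summed Gaussian hail cells on a regular grid (km)."""
          x = np.linspace(0.0, domain, grid)
          X, Y = np.meshgrid(x, x)
          field = np.zeros_like(X)
          for _ in range(n_cells):
              field += gaussian_footprint(
                  X, Y,
                  x0=rng.uniform(0, domain), y0=rng.uniform(0, domain),
                  amp=rng.uniform(0.5, 6.0),          # max hailstone size, cm (placeholder)
                  sigma=rng.uniform(1.0, 5.0),        # km
                  ecc=rng.uniform(0.3, 1.0),
                  theta=rng.uniform(0.0, np.pi),
              )
          return field

      field = stochastic_event()
      print(f"simulated peak hailstone size: {field.max():.1f} cm over a {field.shape} grid")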

  11. A simulation of probabilistic wildfire risk components for the continental United States

    Treesearch

    Mark A. Finney; Charles W. McHugh; Isaac C. Grenfell; Karin L. Riley; Karen C. Short

    2011-01-01

    This simulation research was conducted in order to develop a large-fire risk assessment system for the contiguous land area of the United States. The modeling system was applied to each of 134 Fire Planning Units (FPUs) to estimate burn probabilities and fire size distributions. To obtain stable estimates of these quantities, fire ignition and growth was simulated for...

  12. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
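
    One trial of the described procedure can be sketched as follows: draw the number of discoveries, draw a size for each discovered reservoir, and convert the result into net return and unit production cost, repeating over many trials to build histograms. The binomial/lognormal choices and the economic constants are illustrative assumptions, not the postulated functions of the paper.

      import numpy as np

      rng = np.random.default_rng(42)

      def one_trial(n_wells=20, p_success=0.15, price=4.0, devel_cost=30.0, well_cost=1.0):
          """One simulated exploration program: returns (net return, unit production cost)."""
          successes = rng.binomial(n_wells, p_success)
          if successes == 0:
              return -n_wells * well_cost, np.nan             # only drilling costs, no production
          sizes = rng.lognormal(mean=3.0, sigma=1.2, size=successes)   # reservoir sizes (placeholder units)
          production = sizes.sum()
          total_cost = n_wells * well_cost + successes * devel_cost
          return price * production - total_cost, total_cost / production

      trials = np.array([one_trial() for _ in range(10000)], dtype=float)
      net, unit = trials[:, 0], trials[:, 1]
      print(f"P(net return < 0) = {np.mean(net < 0):.2f}")
      print(f"median unit production cost = {np.nanmedian(unit):.2f}")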

  13. Influence of habitat quality, population size, patch size, and connectivity on patch-occupancy dynamics of the middle spotted woodpecker.

    PubMed

    Robles, Hugo; Ciudad, Carlos

    2012-04-01

    Despite extensive research on the effects of habitat fragmentation, the ecological mechanisms underlying colonization and extinction processes are poorly known, but knowledge of these mechanisms is essential to understanding the distribution and persistence of populations in fragmented habitats. We examined these mechanisms through multiseason occupancy models that elucidated patch-occupancy dynamics of Middle Spotted Woodpeckers (Dendrocopos medius) in northwestern Spain. The number of occupied patches was relatively stable from 2000 to 2010 (15-24% of 101 patches occupied every year) because extinction was balanced by recolonization. Larger and higher quality patches (i.e., higher density of oaks >37 cm dbh [diameter at breast height]) were more likely to be occupied. Habitat quality (i.e., density of large oaks) explained more variation in patch colonization and extinction than did patch size and connectivity, which were both weakly associated with probabilities of turnover. Patches of higher quality were more likely to be colonized than patches of lower quality. Populations in high-quality patches were less likely to become extinct. In addition, extinction in a patch was strongly associated with local population size but not with patch size, which means the latter may not be a good surrogate of population size in assessments of extinction probability. Our results suggest that habitat quality may be a primary driver of patch-occupancy dynamics and may increase the accuracy of models of population survival. We encourage comparisons of competing models that assess occupancy, colonization, and extinction probabilities in a single analytical framework (e.g., dynamic occupancy models) so as to shed light on the association of habitat quality and patch geometry with colonization and extinction processes in different settings and species. ©2012 Society for Conservation Biology.

  14. Theoretical analysis of the influence of aerosol size distribution and physical activity on particle deposition pattern in human lungs.

    PubMed

    Voutilainen, Arto; Kaipio, Jari P; Pekkanen, Juha; Timonen, Kirsi L; Ruuskanen, Juhani

    2004-01-01

    A theoretical comparison of modeled particle depositions in the human respiratory tract was performed by taking into account different particle number and mass size distributions and physical activity in an urban environment. Urban-air data on particulate concentrations in the size range 10 nm-10 microm were used to estimate the hourly average particle number and mass size distribution functions. The functions were then combined with the deposition probability functions obtained from a computerized ICRP 66 deposition model of the International Commission on Radiological Protection to calculate the numbers and masses of particles deposited in five regions of the respiratory tract of a male adult. The man's physical activity and minute ventilation during the day were taken into account in the calculations. Two different mass and number size distributions of aerosol particles with equal (computed) <10 microm particle mass concentrations gave clearly different deposition patterns in the central and peripheral regions of the human respiratory tract. The deposited particle numbers and masses were much higher during the day (0700-1900) than during the night (1900-0700) because an increase in physical activity and ventilation were temporally associated with highly increased traffic-derived particles in urban outdoor air. In future analyses of the short-term associations between particulate air pollution and health, it would not only be important to take into account the outdoor-to-indoor penetration of different particle sizes and human time-activity patterns, but also actual lung deposition patterns and physical activity in significant microenvironments.

  15. Dynamics of social contagions with local trend imitation.

    PubMed

    Zhu, Xuzhen; Wang, Wei; Cai, Shimin; Stanley, H Eugene

    2018-05-09

    Research on social contagion dynamics has not yet included a theoretical analysis of the ubiquitous local trend imitation (LTI) characteristic. We propose a social contagion model with a tent-like adoption probability to investigate the effect of this LTI characteristic on behavior spreading. We also propose a generalized edge-based compartmental theory to describe the proposed model. Through extensive numerical simulations and theoretical analyses, we find a crossover in the phase transition: when the LTI capacity is strong, the growth of the final adoption size exhibits a second-order phase transition. When the LTI capacity is weak, we see a first-order phase transition. For a given behavioral information transmission probability, there is an optimal LTI capacity that maximizes the final adoption size. Finally we find that the above phenomena are not qualitatively affected by the heterogeneous degree distribution. Our suggested theoretical predictions agree with the simulation results.
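
    The 'tent-like' adoption probability can be sketched as a piecewise-linear function of the received behavioral information ratio that rises to a peak and then declines; the peak position and height below are illustrative placeholders, not the parameters of the paper.

      import numpy as np

      def tent_adoption_probability(x, peak=0.5, p_max=1.0):
          """Tent-shaped adoption probability versus received-information ratio x in [0, 1]:
          rises linearly to p_max at x = peak, then falls linearly back to 0 at x = 1."""
          x = np.asarray(x, dtype=float)
          up = p_max * x / peak
          down = p_max * (1.0 - x) / (1.0 - peak)
          return np.clip(np.where(x <= peak, up, down), 0.0, 1.0)

      print(tent_adoption_probability([0.0, 0.25, 0.5, 0.75, 1.0]))   # [0.  0.5 1.  0.5 0. ]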

  16. Nuclear energy release from fragmentation

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Souza, S. R.; Tsang, M. B.; Zhang, Feng-Shou

    2016-08-01

    It is well known that binary fission occurs with positive energy gain. In this article we examine the energetics of splitting uranium and thorium isotopes into various numbers of fragments (from two to eight) of nearly equal size. We find that the energy released by splitting 230,232Th and 235,238U into three equal size fragments is largest. The statistical multifragmentation model (SMM) is applied to calculate the probability of different breakup channels for excited nuclei. By weighting the probability distributions of fragment multiplicity at different excitation energies, we find the peaks of energy release for 230,232Th and 235,238U are around 0.7-0.75 MeV/u at excitation energies between 1.2 and 2 MeV/u in the primary breakup process. Taking into account the secondary de-excitation processes of primary fragments with the GEMINI code, these energy peaks fall to about 0.45 MeV/u.

  17. Near-infrared scattering as a dust diagnostic

    NASA Astrophysics Data System (ADS)

    Saajasto, Mika; Juvela, Mika; Malinen, Johanna

    2018-06-01

    Context. Regarding the evolution of dust grains from diffuse regions of space to dense molecular cloud cores, many questions remain open. Scattering at near-infrared wavelengths, or "cloudshine", can provide information on cloud structure, dust properties, and the radiation field that is complementary to mid-infrared "coreshine" and observations of dust emission at longer wavelengths. Aims: We examine the possibility of using near-infrared scattering to constrain the local radiation field and the dust properties: the scattering and absorption efficiency, the size distribution of the grains, and the maximum grain size. Methods: We use radiative transfer modelling to examine the constraints provided by the J, H, and K bands in combination with mid-infrared surface brightness at 3.6 μm. We use spherical one-dimensional and elliptical three-dimensional cloud models to study the observable effects of different grain size distributions with varying absorption and scattering properties. As an example, we analyse observations of a molecular cloud in Taurus, TMC-1N. Results: The observed surface brightness ratios of the bands change when the dust properties are changed. However, even a change of ±10% in the surface brightness of one band changes the estimated power-law exponent of the size distribution γ by up to 30% and the estimated strength of the radiation field KISRF by up to 60%. The maximum grain size Amax and γ are always strongly anti-correlated. For example, overestimating the surface brightness by 10% changes the estimated radiation field strength by 20% and the exponent of the size distribution by 15%. The analysis of our synthetic observations indicates that the relative uncertainties of the parameter distributions for Amax and γ are on average ~25%, and that the deviation between the estimated and correct values is ΔQ < 15%. For the TMC-1N observations, a maximum grain size Amax > 1.5μm and a size distribution with γ > 4.0 have high probability. The mass weighted average grain size is ⟨am⟩ = 0.113μm. Conclusions: We show that scattered infrared light can be used to derive meaningful limits for the dust parameters. However, errors in the surface brightness data can result in considerable uncertainties in the derived parameters.

  18. LN2 spray droplet size measurement via ensemble diffraction technique

    NASA Technical Reports Server (NTRS)

    Saiyed, N. H.; Jurns, J.; Chato, David J.

    1991-01-01

    The sizes of subcooled liquefied nitrogen droplets are measured with a 5 mW He-Ne laser as a function of the pressure difference (delta P) across flat spray and full cone pressure atomizing nozzles. For delta P's of 3 to 30 psid, the spray Sauter mean diameter (SMD) ranged between 250 and 50 microns. The pressure range tested is representative of those expected during cryogenic fluid transfer operations in space. The droplet sizes from the flat spray nozzles were greater than those from the full cone nozzle. A power function of the form SMD ~ delta P^a describes the spray SMD as a function of the delta P very well. The values of a were -0.36 for the flat spray and -0.87 for the full cone. The reduced dependence of the flat spray SMD on the delta P was probably because of: (1) the absence of a swirler that generates turbulence within the nozzle to enhance atomization, and (2) a possible increase in shearing stress resulting from the delayed atomization due to the absence of turbulence. The nitrogen quality, up to 1.5 percent based on isenthalpic expansion, did not have a distinct and measurable effect on the spray SMD. Both bimodal and monomodal droplet size population distributions were measured. In the bimodal distribution, the frequency of the first mode was much greater than the frequency of the second mode. Also, the frequency of the second mode was low enough that a monomodal approximation would probably give reasonable results.

  19. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only considering the variation of the length of the rotational semi-axis.

  20. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations with respect to probability distributions associated with the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations for a given level of error. This is achieved via a telescoping identity associated with a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve i.i.d. sampling from the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that, under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
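
    A compact sketch of the telescoping identity E[P_L] = E[P_0] + Σ_{l=1..L} E[P_l − P_{l-1}] for a toy problem, with coupled coarse/fine Euler discretisations sharing the same Brownian increments on each level; the SDE, payoff, and sample allocation are placeholder choices, and the SMC extension introduced in the paper is not shown.

      import numpy as np

      rng = np.random.default_rng(21)

      def euler_payoff_pair(level, n_paths, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
          """Coupled fine/coarse Euler estimates of the payoff X_T for dX = mu*X dt + sigma*X dW."""
          n_fine = 2 ** level
          dt = T / n_fine
          dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
          x_fine = np.full(n_paths, x0)
          x_coarse = np.full(n_paths, x0)
          for k in range(n_fine):
              x_fine = x_fine + mu * x_fine * dt + sigma * x_fine * dw[:, k]
          if level == 0:
              return x_fine, np.zeros(n_paths)       # no coarser level to couple with
          dw_coarse = dw[:, 0::2] + dw[:, 1::2]      # coarse path reuses the fine Brownian increments
          for k in range(n_fine // 2):
              x_coarse = x_coarse + mu * x_coarse * (2 * dt) + sigma * x_coarse * dw_coarse[:, k]
          return x_fine, x_coarse

      L = 5
      estimate = 0.0
      for level in range(L + 1):
          n_paths = 40000 // 2 ** level + 100        # crude sample allocation (placeholder)
          fine, coarse = euler_payoff_pair(level, n_paths)
          estimate += np.mean(fine) if level == 0 else np.mean(fine - coarse)
      print(f"MLMC estimate of E[X_T]: {estimate:.4f}  (exact for this toy GBM: {np.exp(0.05):.4f})")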

  1. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    PubMed

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
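
    A toy numerical version of the energetic optimum described above: with energy intake and metabolic cost modelled as power laws of body size, the energetic optimal size is the size that maximises net gain. The exponents and coefficients are placeholder allometries, not values from the review.

      import numpy as np

      def net_energy(size, intake_coef=10.0, intake_exp=0.67, cost_coef=2.0, cost_exp=1.0):
          """Net energy gain = intake - metabolic cost, both as power laws of body size (placeholders)."""
          return intake_coef * size ** intake_exp - cost_coef * size ** cost_exp

      sizes = np.linspace(0.1, 200.0, 5000)
      gain = net_energy(sizes)
      optimal = sizes[np.argmax(gain)]
      print(f"energetic optimal size ≈ {optimal:.1f} (arbitrary units), net gain = {gain.max():.1f}")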

  2. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

    The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, a single PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
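
    A minimal method-of-moments sketch: the Pearson Type 1 curve is a (shifted, scaled) beta distribution, so on a known support the two shape parameters follow from the sample mean and variance. The synthetic rain-rate sample and the assumed support are placeholders, and Elderton's full curve-selection criteria are not reproduced here.

      import numpy as np
      from scipy.stats import beta

      rng = np.random.default_rng(9)

      def fit_beta_moments(x, lower, upper):
          """Method-of-moments fit of a beta (Pearson Type 1) distribution on [lower, upper]."""
          z = (np.asarray(x, float) - lower) / (upper - lower)      # rescale to [0, 1]
          m, v = z.mean(), z.var()
          common = m * (1.0 - m) / v - 1.0
          return m * common, (1.0 - m) * common                     # shape parameters a, b

      # Placeholder "rain rates" on an assumed support [0, 100] mm/h
      rates = rng.gamma(shape=2.0, scale=8.0, size=2000)
      rates = rates[rates < 100.0]
      a, b = fit_beta_moments(rates, 0.0, 100.0)
      print(f"fitted Pearson Type 1 (beta) shapes: a = {a:.2f}, b = {b:.2f}")
      print(f"fitted P90 = {beta.ppf(0.9, a, b) * 100.0:.1f} mm/h, "
            f"empirical P90 = {np.percentile(rates, 90):.1f} mm/h")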

  3. Development of a methodology to evaluate material accountability in pyroprocess

    NASA Astrophysics Data System (ADS)

    Woo, Seungmin

    This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are built to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. Combined cross sections and time also generate axial non-uniformity, as the exponential term in the analytical solution consists of the neutron flux, cross section and time. The axial concentration distribution for a nuclide with a small cross section becomes steeper than that for a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term of the analytical solution. Similarly, the non-uniformity becomes flatter with increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes, and by decoupling the axial distributions from the predetermined representative radial distributions through matching of the axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies, the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles, are obtained. These data are appropriately modified to depict processing of materials in the head-end process of the pyroprocess, that is, chopping, voloxidation and granulation. The expectation and standard deviation of the Pu-to-244Cm-ratio from single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. Then, the uncertainty propagation through the key-pyroprocess is conducted to analyze the Material Unaccounted For (MUF), which is a random variable defined as a receipt minus a shipment of a process, in the system. The random variable LOPu is defined, for evaluating the non-detection probability at each Key Measurement Point (KMP), as the original Pu mass minus the Pu mass after a missing scenario. The number of assemblies required for LOPu to reach 8 kg is considered in this calculation. The probability of detection for the 8 kg LOPu is evaluated with respect to the size of granule and powder using event tree analysis and the hypothesis testing method. There are cases in which the probability of detection for the 8 kg LOPu is less than 95%. In order to enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key-pyroprocess. The probabilities of detection for all spent fuel types based on the new MBA model are greater than 99%. Furthermore, it is observed that the probability of detection significantly increases when granule sample sizes used to evaluate the Pu-to-244Cm-ratio before the key-pyroprocess are increased.
Based on these observations, even though Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm-ratio method is applied, this effect can be surmounted by decreasing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.).
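
    A sketch of the Geary-Hinkley idea invoked for the Pu-to-244Cm ratio: for a ratio T = X/Y of approximately normal measurements with Y far from zero, the quantity (μ_y·t − μ_x)/√(σ_x² + t²·σ_y²) is approximately standard normal (independence of X and Y is assumed in this sketch), which yields tail probabilities for the measured ratio. The numbers below are illustrative, not the dissertation's values.

      import numpy as np
      from scipy.stats import norm

      def ratio_tail_probability(t, mu_x, sigma_x, mu_y, sigma_y):
          """Geary-Hinkley approximation for P(X/Y <= t) with X, Y independent normals and Y >> 0."""
          z = (mu_y * t - mu_x) / np.sqrt(sigma_x ** 2 + t ** 2 * sigma_y ** 2)
          return norm.cdf(z)

      # Illustrative measurement model: Pu mass X and 244Cm mass Y per granule sample (arbitrary units)
      mu_x, sigma_x = 50.0, 2.5
      mu_y, sigma_y = 1.0, 0.06
      nominal = mu_x / mu_y
      for t in (0.9 * nominal, nominal, 1.1 * nominal):
          print(f"P(ratio <= {t:6.1f}) ≈ {ratio_tail_probability(t, mu_x, sigma_x, mu_y, sigma_y):.3f}")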

  4. Impact of physical properties on ozone removal by several porous materials.

    PubMed

    Gall, Elliott T; Corsi, Richard L; Siegel, Jeffrey A

    2014-04-01

    Models of reactive uptake of ozone in indoor environments generally describe materials through aerial (horizontal) projections of surface area, a potentially limiting assumption for porous materials. We investigated the effect of changing porosity/pore size, material thickness, and chamber fluid mechanic conditions on the reactive uptake of ozone to five materials: two cellulose filter papers, two cementitious materials, and an activated carbon cloth. Results include (1) material porosity and pore size distributions, (2) effective diffusion coefficients for ozone in materials, and (3) material-ozone deposition velocities and reaction probabilities. At small length scales (0.02-0.16 cm), increasing thickness caused increases in estimated reaction probabilities from 1 × 10^-6 to 5 × 10^-6 for one type of filter paper and from 1 × 10^-6 to 1 × 10^-5 for a second type of filter paper, an effect not observed for materials tested at larger thicknesses. For high porosity materials, increasing chamber transport-limited deposition velocities resulted in increases in reaction probabilities by factors of 1.4-2.0. The impact of physical properties and transport effects on values of the Thiele modulus, ranging across all materials from 0.03 to 13, is discussed in terms of the challenges in estimating reaction probabilities to porous materials in scenarios relevant to indoor environments.
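
    A back-of-the-envelope sketch relating the quantities discussed: for a porous slab of thickness L with effective diffusivity D_eff and a first-order ozone loss rate constant k, the Thiele modulus is φ = L·√(k/D_eff) and the internal effectiveness factor for a slab is tanh(φ)/φ. The input values below are placeholders chosen to span the reported range of φ, not measurements from the paper.

      import numpy as np

      def thiele_modulus(thickness_cm, k_per_s, d_eff_cm2_per_s):
          """Thiele modulus for a first-order reaction in a slab: phi = L * sqrt(k / D_eff)."""
          return thickness_cm * np.sqrt(k_per_s / d_eff_cm2_per_s)

      def effectiveness_factor(phi):
          """Fraction of the slab volume that is effectively reactive (slab geometry)."""
          return np.tanh(phi) / phi

      # Placeholder values spanning thin filter paper to a thicker cementitious material
      for L, k, d_eff in [(0.02, 0.5, 5e-3), (0.16, 0.5, 5e-3), (1.0, 0.05, 1e-3)]:
          phi = thiele_modulus(L, k, d_eff)
          print(f"L={L:5.2f} cm  phi={phi:5.2f}  effectiveness={effectiveness_factor(phi):.2f}")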

  5. Geomorphological and sedimentary evidence of probable glaciation in the Jizerské hory Mountains, Central Europe

    NASA Astrophysics Data System (ADS)

    Engel, Zbyněk; Křížek, Marek; Kasprzak, Marek; Traczyk, Andrzej; Hložek, Martin; Krbcová, Klára

    2017-03-01

    The Jizerské hory Mountains in the Czech Republic have traditionally been considered to be a highland that lay beyond the limits of Quaternary glaciations. Recent work on cirque-like valley heads in the central part of the range has shown that niche glaciers could form during the Quaternary. Here we report geomorphological and sedimentary evidence for a small glacier in the Pytlácká jáma Hollow that represents one of the most-enclosed valley heads within the range. Shape and size characteristics of this landform indicate that the hollow is a glacial cirque at a degraded stage of development. Boulder accumulations at the downslope side of the hollow probably represent a relic of terminal moraines, and the grain size distribution of clasts together with micromorphology of quartz grains from the hollow indicate the glacial environment of a small glacier. This glacier represents the lowermost located such system in central Europe and provides evidence for the presence of niche or small cirque glaciers probably during pre-Weichselian glacial periods. The glaciation limit (1000 m asl) and paleo-ELA (900 m asl) proposed for the Jizerské hory Mountains implies that central European ranges lower than 1100 m asl were probably glaciated during the Quaternary.

  6. A computer simulation of free-volume distributions and related structural properties in a model lipid bilayer.

    PubMed Central

    Xiang, T X

    1993-01-01

    A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 x 100 chain molecules, with each chain molecule having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 A2/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 A-3, respectively, and increase to 0.049 and 0.0067 A-3 at the mid-plane. The first characteristic cavity size V1 is only weakly dependent on position in the bilayer interior, with an average value of 3.4 A3, while the second characteristic cavity size V2 varies more dramatically, from a plateau value of 12.9 A3 in the highly ordered chain region to 9.0 A3 in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of the chain segments in the bilayer without overlapping any of them. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that both overall and internal motions have comparable contributions to local disorder and couple strongly with each other, and that the occurrence of kink defects has a higher probability than predicted from an independent-transition model. PMID:8241390

  7. Spatial event cluster detection using an approximate normal distribution.

    PubMed

    Torabi, Mahmoud; Rosychuk, Rhonda J

    2008-12-12

    In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events are large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies twelve out of thirteen of the same clusters as the compound Poisson approach, noting that the compound Poisson method detects twelve significant clusters in total. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence, more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
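
    A small sketch of the approximation idea: if the number of events in an area is compound Poisson (individuals ~ Poisson, events per individual from some count distribution), its mean and variance are λ·E[X] and λ·E[X²], and a cluster test can use a normal tail probability built from those two moments. The events-per-individual distribution below is an illustrative choice, with a Monte Carlo check of the exceedance probabilities for comparison.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(13)

      lam = 40.0                                   # expected number of individuals in the area
      support = np.array([0, 1, 2, 3])             # illustrative events-per-individual distribution
      probs = np.array([0.55, 0.30, 0.10, 0.05])

      mean_x = support @ probs
      mean_x2 = (support ** 2) @ probs
      mu, var = lam * mean_x, lam * mean_x2        # compound-Poisson mean and variance

      def normal_exceedance(n_obs):
          """Approximate P(N >= n_obs) with a continuity-corrected normal tail."""
          return norm.sf((n_obs - 0.5 - mu) / np.sqrt(var))

      # Monte Carlo check of the approximation
      m = rng.poisson(lam, size=50000)
      sims = np.array([rng.choice(support, size=k, p=probs).sum() if k else 0 for k in m])
      for n_obs in (35, 45, 55):
          print(f"n={n_obs}: normal approx {normal_exceedance(n_obs):.4f}, "
                f"simulated {np.mean(sims >= n_obs):.4f}")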

  8. Wind-driven upwelling effects on cephalopod paralarvae: Octopus vulgaris and Loliginidae off the Galician coast (NE Atlantic)

    NASA Astrophysics Data System (ADS)

    Otero, Jaime; Álvarez-Salgado, X. Antón; González, Ángel F.; Souto, Carlos; Gilcoto, Miguel; Guerra, Ángel

    2016-02-01

    Circulation patterns of coastal upwelling areas may have central consequences for the abundance and cross-shelf transport of the larval stages of many species. Previous studies have provided evidence that larval distribution results from a combination of subtidal circulation, species-specific behaviour and larval sources. However, most of these works were conducted on organisms characterised by small-sized and abundant early life phases. Here, we studied the influence of the hydrography and circulation of the Ría de Vigo and adjacent shelf (NW Iberian upwelling system) on the paralarval abundance of two contrasting cephalopods, the benthic common octopus (Octopus vulgaris) and the pelagic squids (Loliginidae). We repeatedly sampled a cross-shore transect during 2003-2005 and used zero-inflated models to accommodate the scarcity and patchy distribution of cephalopod paralarvae. The probability of catching early stages of both cephalopods was higher at night. Octopus paralarvae were more abundant in the surface layer at night whereas loliginids preferred the bottom layer regardless of the sampling time. Abundance of both cephalopods increased when shelf currents flowed polewards, water temperature was high and water column stability was low. The probability of observing an excess of zero catches decreased during the year for octopus and at high current speed for loliginids. In addition, the circulation pattern conditioned the body size distribution of both paralarvae; while the average size of the captured octopuses increased (decreased) with poleward currents at daylight (nighttime), squids were smaller with poleward currents regardless of the sampling time. These results contribute to the understanding of the effects that the hydrography and subtidal circulation of a coastal upwelling have on the fate of cephalopod early life stages.
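
    To illustrate the kind of model used above, the sketch below fits a zero-inflated Poisson to patchy count data by maximum likelihood. It assumes a constant inflation probability and mean, with made-up counts; the study's actual models include covariates (sampling time, depth, currents) and a more elaborate model structure.

```python
# Minimal zero-inflated Poisson fit (no covariates) for patchy count data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, counts):
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))    # probability of a structural zero
    lam = np.exp(log_lam)                    # Poisson mean of the count component
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log(1.0 - pi) - lam + counts * np.log(lam) - gammaln(counts + 1)
    return -np.sum(np.where(counts == 0, ll_zero, ll_pos))

# Hypothetical paralarval counts per tow: mostly zeros with occasional catches.
counts = np.array([0, 0, 0, 2, 0, 5, 0, 0, 1, 0, 0, 3, 0, 0, 0, 7, 0, 1, 0, 0])
fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
logit_pi, log_lam = fit.x
print("zero-inflation prob:", 1 / (1 + np.exp(-logit_pi)), "mean count:", np.exp(log_lam))
```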

  9. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form; this is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the system (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach for dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
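
    The sketch below shows the basic HMC move the abstract builds on: leapfrog (MD-style) integration of a fictitious Hamiltonian followed by a Metropolis accept/reject step. The target here is a toy Gaussian stand-in for a FEM-updating posterior; the shadow-Hamiltonian modification of SHMC is not reproduced.

```python
# Bare-bones Hamiltonian/Hybrid Monte Carlo sketch with a toy Gaussian target.
import numpy as np

rng = np.random.default_rng(0)

def neg_log_post(q):               # -log posterior (toy: standard normal)
    return 0.5 * np.dot(q, q)

def grad_neg_log_post(q):
    return q

def leapfrog(q, p, step, n_steps):
    p = p - 0.5 * step * grad_neg_log_post(q)      # initial half kick
    for i in range(n_steps):
        q = q + step * p                            # drift
        if i < n_steps - 1:
            p = p - step * grad_neg_log_post(q)     # full kick
    p = p - 0.5 * step * grad_neg_log_post(q)       # final half kick
    return q, p

def hmc_step(q, step=0.1, n_steps=20):
    p0 = rng.standard_normal(q.shape)               # resample momentum
    q1, p1 = leapfrog(q, p0, step, n_steps)
    h0 = neg_log_post(q) + 0.5 * np.dot(p0, p0)
    h1 = neg_log_post(q1) + 0.5 * np.dot(p1, p1)
    return q1 if rng.random() < np.exp(h0 - h1) else q   # Metropolis accept/reject

q = np.zeros(5)
samples = []
for _ in range(1000):
    q = hmc_step(q)
    samples.append(q)
```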

  10. Crystallization of hard spheres revisited. I. Extracting kinetics and free energy landscape from forward flux sampling.

    PubMed

    Richard, David; Speck, Thomas

    2018-03-28

    We investigate the kinetics and the free energy landscape of the crystallization of hard spheres from a supersaturated metastable liquid through direct simulations and forward flux sampling. In this first paper, we describe and test two different ways to reconstruct the free energy barriers from the sampled steady state probability distribution of cluster sizes without sampling the equilibrium distribution. The first method is based on mean first passage times, and the second method is based on splitting probabilities. We verify both methods for a single particle moving in a double-well potential. For the nucleation of hard spheres, these methods allow us to probe a wide range of supersaturations and to reconstruct the kinetics and the free energy landscape from the same simulation. Results are consistent with the scaling predicted by classical nucleation theory, although a quantitative fit requires a rather large effective interfacial tension.
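
    As a simplified illustration of turning sampled cluster statistics into an effective free-energy curve, the sketch below applies F(n) = -kT ln P(n) to a histogram of cluster sizes. The paper's reconstruction is more involved: it starts from the steady-state distribution obtained in forward flux sampling and corrects it using mean first passage times or splitting probabilities, which is omitted here.

```python
# Toy reconstruction of an effective free-energy curve from sampled cluster sizes.
import numpy as np

def effective_free_energy(cluster_sizes, kT=1.0, max_n=None):
    sizes = np.asarray(cluster_sizes, dtype=int)
    max_n = max_n or sizes.max()
    counts = np.bincount(sizes, minlength=max_n + 1)[1:]   # histogram for n = 1..max_n
    prob = counts / counts.sum()
    with np.errstate(divide="ignore"):
        f = -kT * np.log(prob)                              # F(n) = -kT ln P(n)
    return np.arange(1, max_n + 1), f

# Hypothetical largest-cluster sizes recorded along simulation frames.
n, F = effective_free_energy([1, 1, 2, 1, 3, 2, 1, 1, 4, 2, 1, 6, 1, 2, 3, 1])
```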

  11. Crystallization of hard spheres revisited. I. Extracting kinetics and free energy landscape from forward flux sampling

    NASA Astrophysics Data System (ADS)

    Richard, David; Speck, Thomas

    2018-03-01

    We investigate the kinetics and the free energy landscape of the crystallization of hard spheres from a supersaturated metastable liquid through direct simulations and forward flux sampling. In this first paper, we describe and test two different ways to reconstruct the free energy barriers from the sampled steady state probability distribution of cluster sizes without sampling the equilibrium distribution. The first method is based on mean first passage times, and the second method is based on splitting probabilities. We verify both methods for a single particle moving in a double-well potential. For the nucleation of hard spheres, these methods allow us to probe a wide range of supersaturations and to reconstruct the kinetics and the free energy landscape from the same simulation. Results are consistent with the scaling predicted by classical nucleation theory, although a quantitative fit requires a rather large effective interfacial tension.

  12. Almost all quantum channels are equidistant

    NASA Astrophysics Data System (ADS)

    Nechita, Ion; Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol

    2018-05-01

    In this work, we analyze properties of generic quantum channels in the case of large system size. We use random matrix theory and free probability to show that the distance between two independent random channels converges to a constant value as the dimension of the system grows larger. As a measure of the distance we use the diamond norm. In the case of a flat Hilbert-Schmidt distribution on quantum channels, we obtain that the distance converges to 1/2 + 2/π, also giving an estimate for the maximum success probability for distinguishing the channels. We also consider the problem of distinguishing two random unitary rotations.

  13. Classical Physics and the Bounds of Quantum Correlations.

    PubMed

    Frustaglia, Diego; Baltanás, José P; Velázquez-Ahumada, María C; Fernández-Prieto, Armando; Lujambio, Aintzane; Losada, Vicente; Freire, Manuel J; Cabello, Adán

    2016-06-24

    A unifying principle explaining the numerical bounds of quantum correlations remains elusive, despite the efforts devoted to identifying it. Here, we show that these bounds are indeed not exclusive to quantum theory: for any abstract correlation scenario with compatible measurements, models based on classical waves produce probability distributions indistinguishable from those of quantum theory and, therefore, share the same bounds. We demonstrate this finding by implementing classical microwaves that propagate along meter-size transmission-line circuits and reproduce the probabilities of three emblematic quantum experiments. Our results show that the "quantum" bounds would also occur in a classical universe without quanta. The implications of this observation are discussed.

  14. Scale-free behavior of networks with the copresence of preferential and uniform attachment rules

    NASA Astrophysics Data System (ADS)

    Pachon, Angelica; Sacerdote, Laura; Yang, Shuyi

    2018-05-01

    Complex networks in different areas exhibit degree distributions with a heavy upper tail. A preferential attachment mechanism in a growth process produces a graph with this feature. We herein investigate a variant of the simple preferential attachment model, whose modifications are interesting for two main reasons: to analyze more realistic models and to study the robustness of the scale-free behavior of the degree distribution. We introduce and study a model which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1 - p) that stresses the rich-get-richer mechanism, and a uniform choice (with probability p) for the most recent nodes, i.e., the nodes belonging to a window of size w to the left of the last-born node. The latter highlights a tendency to select one of the most recently added nodes when no information is available. The recent nodes can be either a given fixed number or a proportion (αn) of the total number of existing nodes. In the first case, we prove that this model exhibits an asymptotically power-law degree distribution. The same result is then illustrated through simulations in the second case. When the window of recent nodes has a constant size, we herein prove that the presence of the uniform rule delays the starting time from which the asymptotic regime starts to hold. The mean number of nodes of degree k and the asymptotic degree distribution are also determined analytically. Finally, a sensitivity analysis on the parameters of the model is performed.
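
    A short simulation of the mixed rule described above: each new node attaches with probability 1 - p preferentially by degree and with probability p uniformly to one of the w most recent nodes. Single-edge growth and the parameter values are assumptions made for illustration; the paper's analysis also covers a window that grows proportionally to the network size.

```python
# Growth model mixing preferential attachment with uniform choice among recent nodes.
import random

def grow_network(n_nodes=10000, p=0.3, w=50, seed=1):
    rng = random.Random(seed)
    degrees = [1, 1]           # start from two connected nodes
    targets = [0, 1]           # node i appears degrees[i] times in this list
    for new in range(2, n_nodes):
        if rng.random() < p:   # uniform choice among the w most recent nodes
            target = rng.randrange(max(0, new - w), new)
        else:                  # preferential attachment: pick proportionally to degree
            target = rng.choice(targets)
        degrees[target] += 1
        degrees.append(1)
        targets.extend([target, new])
    return degrees

degrees = grow_network()
# For small p the tail of the degree histogram should stay close to a power law.
```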

  15. Anomalous, non-Gaussian tracer diffusion in crowded two-dimensional environments

    NASA Astrophysics Data System (ADS)

    Ghosh, Surya K.; Cherstvy, Andrey G.; Grebenkov, Denis S.; Metzler, Ralf

    2016-01-01

    A topic of intense current investigation pursues the question of how the highly crowded environment of biological cells affects the dynamic properties of passively diffusing particles. Motivated by recent experiments, we report results of extensive simulations of the motion of a finite-sized tracer particle in a heterogeneously crowded environment made up of quenched distributions of monodisperse crowders of varying sizes in finite circular two-dimensional domains. For given spatial distributions of monodisperse crowders we demonstrate how anomalous diffusion with strongly non-Gaussian features arises in this model system. We investigate both biologically relevant situations of particles released either at the surface of an inner domain or at the outer boundary, exhibiting distinctly different features of the observed anomalous diffusion for heterogeneous distributions of crowders. Specifically, we reveal an asymmetric spreading of tracers even at moderate crowding. In addition to the mean squared displacement (MSD) and local diffusion exponent, we investigate the magnitude and the amplitude scatter of the time-averaged MSD of individual tracer trajectories, the non-Gaussianity parameter, and the van Hove correlation function. We also quantify how the average tracer diffusivity varies with the position in the domain with a heterogeneous radial distribution of crowders and examine the behaviour and dynamics of the tracer survival probability. Inter alia, the systems we investigate are related to the passive transport of lipid molecules and proteins in two-dimensional crowded membranes or the motion in colloidal solutions or emulsions in effectively two-dimensional geometries, as well as inside supercrowded, surface-adhered cells.

  16. Analyzing wildfire exposure and source–sink relationships on a fire prone forest landscape

    Treesearch

    Alan A. Ager; Nicole M. Vaillant; Mark A. Finney; Haiganoush K. Preisler

    2012-01-01

    We used simulation modeling to analyze wildfire exposure to social and ecological values on a 0.6 million ha national forest in central Oregon, USA. We simulated 50,000 wildfires that replicated recent fire events in the area and generated detailed maps of burn probability (BP) and fire intensity distributions. We also recorded the ignition locations and size of each...

  17. Aerial survey methodology for bison population estimation in Yellowstone National Park

    USGS Publications Warehouse

    Hess, Steven C.

    2002-01-01

    I developed aerial survey methods for statistically rigorous bison population estimation in Yellowstone National Park to support sound resource management decisions and to understand bison ecology. Survey protocols, data recording procedures, a geographic framework, and seasonal stratifications were based on field observations from February 1998-September 2000. The reliability of this framework and strata were tested with long-term data from 1970-1997. I simulated different sample survey designs and compared them to high-effort censuses of well-defined large areas to evaluate effort, precision, and bias. Sample survey designs require much effort and extensive information on the current spatial distribution of bison and therefore do not offer any substantial reduction in time and effort over censuses. I conducted concurrent ground surveys, or 'double sampling', to estimate detection probability during aerial surveys. Group size distribution and habitat strongly affected detection probability. In winter, 75% of the groups and 92% of individual bison were detected on average from aircraft, while in summer, 79% of groups and 97% of individual bison were detected. I also used photography to quantify the bias in counting large groups of bison and found that undercounting increased with group size and could reach 15%. I compared survey conditions between seasons and identified optimal time windows for conducting surveys in both winter and summer. These windows account for the habitats and total area bison occupy, and the group size distribution. Bison became increasingly scattered over the Yellowstone region in smaller groups and increasingly occupied unfavorable habitats as winter progressed. Therefore, the best conditions for winter surveys occur early in the season (Dec-Jan). In summer, bison were most spatially aggregated and occurred in the largest groups by early August. Low variability between surveys and high detection probability provide population estimates with an overall coefficient of variation of approximately 8% and have high power for detecting trends in population change. I demonstrated how population estimates from winter and summer can be integrated into a comprehensive monitoring program to estimate annual growth rates, overall winter mortality, and an index of calf production, requiring about 30 hours of flight per year.
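
    A bare-bones version of the double-sampling correction mentioned above: detection probability is estimated from units surveyed both from the air and from the ground, and the aerial count is divided by it. The numbers are hypothetical (the 0.92 detection rate echoes the winter figure for individual bison); the actual analysis stratifies detection by group size and habitat.

```python
# Detection-corrected abundance estimate from a double-sampling comparison.

def detection_corrected_estimate(aerial_total, aerial_on_double, ground_on_double):
    p_detect = aerial_on_double / ground_on_double   # aerial detection probability
    return aerial_total / p_detect, p_detect

# Hypothetical example: 2,900 bison counted from the air; on double-sampled units,
# aerial crews saw 460 of the 500 bison found by ground crews.
estimate, p = detection_corrected_estimate(2900, 460, 500)
print(f"detection probability {p:.2f}, corrected estimate {estimate:.0f}")
```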

  18. Thermodynamic limits on the size and size distribution of nucleic acids synthesized in vitro: the role of pyrophosphate hydrolysis.

    PubMed

    Peller, L

    1977-02-08

    The free-energy change of phosphodiester bond formation from nucleoside triphosphates is more favorable than with nucleoside diphosphates as substrates. Base-stacking interactions can make significant contributions to both ΔG°′ values. Pyrophosphate hydrolysis, when it accompanies the former reaction, dominates all thermodynamic considerations. Three experimental situations are discussed in which high-molecular-weight polynucleotides are synthesized without a strong driving force for covalent bond formation. For one of these, a kinetic scheme is presented which encompasses an early narrow Poisson distribution of chain lengths with ultimate passage to a disperse equilibrium population of chain sizes. Hydrolytic removal of pyrophosphate expands the time scale for this undesirable process by a factor of 10⁹, while it enormously elevates the thermodynamic ceiling for the average degrees of polymerization in the other two examples. The broad size population revealed by electron micrography in an early study of partial replication of a T7 DNA template is found to adhere (fortuitously) to a disperse most probable representation. Some possible origins are examined for the branched structures in this product, as well as in a later investigation of replication of this nucleic acid. The achievement of both very high molecular weights and sharply peaked size distributions in polynucleotides synthesized in vitro will require coupling to inorganic pyrophosphatase action as in vivo.

  19. Small-Scale Drop-Size Variability: Empirical Models for Drop-Size-Dependent Clustering in Clouds

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Knyazikhin, Yuri; Larsen, Michael L.; Wiscombe, Warren J.

    2005-01-01

    By analyzing aircraft measurements of individual drop sizes in clouds, it has been shown in a companion paper that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and a Poisson distribution of cloud drops, these models illustrate strong drop clustering, especially with larger drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, including how fast rain can form. For radiative transfer theory, clustering of large drops enhances their impact on the cloud optical path. The clustering phenomenon also helps explain why remotely sensed cloud drop size is generally larger than that measured in situ.

  20. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

    Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs obtained from each method. Analyses from this study show the non-parametric absolute critical value method is easy to use, but unable to reflect differences in the spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and to the selection of a percentile, making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation data with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with an uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further shows that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
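
    A minimal detrended fluctuation analysis (DFA) sketch is given below for a synthetic daily rainfall-like series. It computes the fluctuation function F(s) over window sizes s and the scaling exponent; how the study maps the DFA output onto extreme precipitation thresholds is not reproduced here.

```python
# Minimal DFA: integrate the series, detrend in windows, and fit the scaling exponent.
import numpy as np

def dfa(series, window_sizes):
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())                 # integrated, mean-removed series
    fluctuations = []
    for s in window_sizes:
        n_win = len(profile) // s
        mean_sq = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            mean_sq.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(mean_sq)))
    return np.array(fluctuations)

rng = np.random.default_rng(0)
series = rng.gamma(shape=0.5, scale=8.0, size=4000)        # toy rainfall-like data
sizes = np.array([16, 32, 64, 128, 256])
F = dfa(series, sizes)
alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]          # DFA scaling exponent
```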

  1. Average BER and outage probability of the ground-to-train OWC link in turbulence with rain

    NASA Astrophysics Data System (ADS)

    Zhang, Yixin; Yang, Yanqiu; Hu, Beibei; Yu, Lin; Hu, Zheng-Da

    2017-09-01

    The bit-error rate (BER) and outage probability of an optical wireless communication (OWC) link for ground-to-train communication along a curved track in turbulence with rain are evaluated. Considering the re-modulation effects of rain fluctuation on an optical signal already modulated by turbulence, we set up models of average BER and outage probability in the presence of pointing errors, based on the double inverse Gaussian (IG) statistical distribution model. The numerical results indicate that, for the same covered track length, a larger curvature radius increases the outage probability and average BER. The performance of the OWC link in turbulence with rain is limited mainly by the rain rate and the pointing errors induced by beam wander and train vibration. The effect of the rain rate on the performance of the link is more severe than that of the atmospheric turbulence, but the fluctuation owing to the atmospheric turbulence affects the laser beam propagation more strongly than the skewness of the rain distribution. Besides, the turbulence-induced beam wander has a more significant impact on the system in heavier rain. The size of the transmitting and receiving apertures can be chosen, and the shockproof performance of the tracks improved, to optimize the communication performance of the system.

  2. A moment-convergence method for stochastic analysis of biochemical reaction networks.

    PubMed

    Zhang, Jiajun; Nie, Qing; Zhou, Tianshou

    2016-05-21

    Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distributions, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in the sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.

  3. Accounting for treatment by center interaction in sample size determinations and the use of surrogate outcomes in the pessary for the prevention of preterm birth trial: a simulation study.

    PubMed

    Willan, Andrew R

    2016-07-05

    The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the issue of how this will affect the sample size requirements was raised. The sample size requirements to examine the effect of the pessary on the baby's clinical outcome were prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with the coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
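
    The simulation below sketches the sample-size reasoning described above: control-arm preterm-birth probabilities vary across centers following a Beta distribution with mean 0.3 and coefficient of variation 0.3, and power is estimated by simulating two-arm trials with a pooled two-proportion z-test. The assumed risk reduction, number of centers, and test are illustrative, not the trial's actual design parameters.

```python
# Power simulation with Beta-distributed center-specific control-arm probabilities.
import numpy as np
from scipy.stats import norm

def beta_params(mean, cv):
    var = (mean * cv) ** 2
    total = mean * (1 - mean) / var - 1           # alpha + beta
    return mean * total, (1 - mean) * total

def simulated_power(n_per_arm, n_centers=20, rr=0.7, mean_p=0.3, cv=0.3,
                    alpha=0.05, n_sims=2000, seed=2):
    rng = np.random.default_rng(seed)
    a, b = beta_params(mean_p, cv)
    m = n_per_arm // n_centers                    # patients per arm per center
    hits = 0
    for _ in range(n_sims):
        p_ctrl = rng.beta(a, b, size=n_centers)   # center-specific control risks
        ctrl = rng.binomial(m, p_ctrl).sum()
        trt = rng.binomial(m, rr * p_ctrl).sum()
        p1, p2 = ctrl / (m * n_centers), trt / (m * n_centers)
        pooled = (ctrl + trt) / (2 * m * n_centers)
        se = np.sqrt(2 * pooled * (1 - pooled) / (m * n_centers))
        hits += (p1 - p2) / se > norm.ppf(1 - alpha / 2)
    return hits / n_sims

print(simulated_power(700))
```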

  4. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

    The probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it differs for groups of 2, 3, 4, … members. The probability of coincidental clustering is assessed by numerical simulation and therefore also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. Also, for the user's convenience, we have obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small value of the probability of coincidental similarity between two orbits.

  5. A sub-Mercury-sized exoplanet.

    PubMed

    Barclay, Thomas; Rowe, Jason F; Lissauer, Jack J; Huber, Daniel; Fressin, François; Howell, Steve B; Bryson, Stephen T; Chaplin, William J; Désert, Jean-Michel; Lopez, Eric D; Marcy, Geoffrey W; Mullally, Fergal; Ragozzine, Darin; Torres, Guillermo; Adams, Elisabeth R; Agol, Eric; Barrado, David; Basu, Sarbani; Bedding, Timothy R; Buchhave, Lars A; Charbonneau, David; Christiansen, Jessie L; Christensen-Dalsgaard, Jørgen; Ciardi, David; Cochran, William D; Dupree, Andrea K; Elsworth, Yvonne; Everett, Mark; Fischer, Debra A; Ford, Eric B; Fortney, Jonathan J; Geary, John C; Haas, Michael R; Handberg, Rasmus; Hekker, Saskia; Henze, Christopher E; Horch, Elliott; Howard, Andrew W; Hunter, Roger C; Isaacson, Howard; Jenkins, Jon M; Karoff, Christoffer; Kawaler, Steven D; Kjeldsen, Hans; Klaus, Todd C; Latham, David W; Li, Jie; Lillo-Box, Jorge; Lund, Mikkel N; Lundkvist, Mia; Metcalfe, Travis S; Miglio, Andrea; Morris, Robert L; Quintana, Elisa V; Stello, Dennis; Smith, Jeffrey C; Still, Martin; Thompson, Susan E

    2013-02-28

    Since the discovery of the first exoplanets, it has been known that other planetary systems can look quite unlike our own. Until fairly recently, we have been able to probe only the upper range of the planet size distribution, and, since last year, to detect planets that are the size of Earth or somewhat smaller. Hitherto, no planets have been found that are smaller than those we see in the Solar System. Here we report a planet significantly smaller than Mercury. This tiny planet is the innermost of three that orbit the Sun-like host star, which we have designated Kepler-37. Owing to its extremely small size, similar to that of the Moon, and highly irradiated surface, the planet, Kepler-37b, is probably rocky with no atmosphere or water, similar to Mercury.

  6. Size distribution of particle-phase sugar and nitrophenol tracers during severe urban haze episodes in Shanghai

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Jiang, Li; Hoa, Le Phuoc; Lyu, Yan; Xu, Tingting; Yang, Xin; Iinuma, Yoshiteru; Chen, Jianmin; Herrmann, Hartmut

    2016-11-01

    In this study, measurements of size-resolved sugar and nitrophenol concentrations and their distributions during Shanghai haze episodes were performed. The primary goal was to track their possible source categories and investigate the contribution of biological and biomass burning aerosols to urban haze events through regional transport. The results showed that levoglucosan had the highest concentration (40-852 ng m⁻³) followed by 4-nitrophenol (151-768 ng m⁻³), sucrose (38-380 ng m⁻³), 4-nitrocatechol (22-154 ng m⁻³), and mannitol (5-160 ng m⁻³). Size distributions showed that over 90% of the total levoglucosan and 4-nitrocatechol accumulated in the fine-particle size fraction (<2.1 μm), particularly in heavier haze periods. The back trajectories further supported the fact that levoglucosan was linked to biomass-burning particles, with higher values associated with air masses passing over biomass burning areas (fire spots) before reaching Shanghai. Other primary saccharide and nitrophenol species showed an unusually large peak in the coarse-mode size fraction (>2.1 μm), which can be correlated with emissions from local sources (biological emission). Principal component analysis (PCA) and positive matrix factorization (PMF) revealed four probable sources (biomass burning: 28%, airborne pollen: 25%, fungal spores: 24%, and combustion emission: 23%) responsible for urban haze events. Taken together, these findings provide useful insight into size-resolved source apportionment analysis via molecular markers for urban haze pollution events in Shanghai.

  7. Optimized lower leg injury probability curves from postmortem human subject tests under axial impacts.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko

    2014-01-01

    The objective was to derive optimum injury probability curves describing human tolerance of the lower leg using parametric survival analysis. The study reexamined postmortem human subject (PMHS) lower leg data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age as the covariable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal, and log-logistic distributions was based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function compared to the other two distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45- and 65-year-olds at the 5, 25, and 50% risk levels for lower leg fracture. For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.
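
    For a fitted Weibull risk curve, risk(F) = 1 - exp[-(F/scale)^shape], the force at a chosen risk level follows in closed form, which is how tables like the 5/25/50% values above can be read off a fitted model. The shape and age-specific scale values below are hypothetical placeholders, not the paper's estimates.

```python
# Peak force at a given injury risk level from an assumed Weibull risk curve.
import numpy as np

def force_at_risk(risk, shape, scale):
    return scale * (-np.log(1.0 - risk)) ** (1.0 / shape)

shape = 3.5                                    # hypothetical Weibull shape parameter
scale_by_age = {25: 11.0, 45: 8.8, 65: 7.0}    # hypothetical kN scale per age group
for age, scale in scale_by_age.items():
    forces = [force_at_risk(r, shape, scale) for r in (0.05, 0.25, 0.50)]
    print(age, [round(f, 1) for f in forces])
```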

  8. To cross or not to cross: modeling wildlife road crossings as a binary response variable with contextual predictors

    USGS Publications Warehouse

    Siers, Shane R.; Reed, Robert N.; Savidge, Julie A.

    2016-01-01

    Roads are significant barriers to landscape-scale movements of individuals or populations of many wildlife taxa. The decision by an animal near a road to either cross or not cross may be influenced by characteristics of the road, environmental conditions, traits of the individual animal, and other aspects of the context within which the decision is made. We considered such factors in a mixed-effects logistic regression model describing the nightly road crossing probabilities of invasive nocturnal Brown Treesnakes (Boiga irregularis) through short-term radiotracking of 691 snakes within close proximity to 50 road segments across the island of Guam. All measures of road magnitude (traffic volume, gap width, surface type, etc.) were significantly negatively correlated with crossing probabilities. Snake body size was the only intrinsic factor associated with crossing rates, with larger snakes crossing roads more frequently. Humidity was the only environmental variable affecting crossing rate. The distance of the snake from the road at the start of nightly movement trials was the most significant predictor of crossings. The presence of snake traps with live mouse lures during a portion of the trials indicated that localized prey cues reduced the probability of a snake crossing the road away from the traps, suggesting that a snake's decision to cross roads is influenced by local foraging opportunities. Per capita road crossing rates of Brown Treesnakes were very low, and comparisons to historical records suggest that crossing rates have declined in the 60+ yr since introduction to Guam. We report a simplified model that will allow managers to predict road crossing rates based on snake, road, and contextual characteristics. Road crossing simulations based on actual snake size distributions demonstrate that populations with size distributions skewed toward larger snakes will result in a higher number of road crossings. Our method of modeling per capita road crossing probabilities as a binary response variable, influenced by contextual factors, may be useful for describing or predicting road crossings by individuals of other taxa provided that appropriate spatial and temporal resolution can be achieved and that potentially influential covariate data can be obtained.
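
    The sketch below is a fixed-effects stand-in for the mixed-effects logistic model described above, regressing a simulated nightly crossing indicator on road gap width, snake body size, and starting distance from the road. All data and coefficients are fabricated for illustration, and the random effects for individual snakes and road segments used in the study are ignored.

```python
# Simplified logistic regression for a binary road-crossing response with simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 600
gap_width = rng.uniform(3, 20, n)        # road gap width (m)
body_size = rng.normal(900, 150, n)      # snout-vent length (mm)
distance = rng.uniform(0, 100, n)        # distance from road at start of night (m)
logit = 0.5 - 0.15 * gap_width + 0.002 * (body_size - 900) - 0.03 * distance
crossed = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated crossing outcomes

X = np.column_stack([gap_width, body_size, distance])
model = LogisticRegression(max_iter=1000).fit(X, crossed)
print(model.coef_, model.intercept_)
```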

  9. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of the logistics and costs of large-river sampling and the spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate at which occupied quadrats were encountered. Occupied units had a higher probability of selection under adaptive designs than under conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on sampling designs for freshwater mussels in the UMR, and presumably other large rivers.

  10. Crater Topography on Titan: Implications for Landscape Evolution

    NASA Technical Reports Server (NTRS)

    Neish, Catherine D.; Kirk, R.L.; Lorenz, R. D.; Bray, V. J.; Schenk, P.; Stiles, B. W.; Turtle, E.; Mitchell, K.; Hayes, A.

    2013-01-01

    We present a comprehensive review of available crater topography measurements for Saturn's moon Titan. In general, the depths of Titan's craters are within the range of depths observed for similarly sized fresh craters on Ganymede, but several hundreds of meters shallower than Ganymede's average depth vs. diameter trend. Depth-to-diameter ratios are between 0.0012 +/- 0.0003 (for the largest crater studied, Menrva, D approximately 425 km) and 0.017 +/- 0.004 (for the smallest crater studied, Ksa, D approximately 39 km). When we evaluate the Anderson-Darling goodness-of-fit parameter, we find that there is less than a 10% probability that Titan's craters have a current depth distribution that is consistent with the depth distribution of fresh craters on Ganymede. There is, however, a much higher probability that the relative depths are uniformly distributed between 0 (fresh) and 1 (completely infilled). This distribution is consistent with an infilling process that is relatively constant with time, such as aeolian deposition. Assuming that Ganymede represents a close 'airless' analogue to Titan, the difference in depths represents the first quantitative measure of the amount of modification that has shaped Titan's surface, the only body in the outer Solar System with extensive surface-atmosphere exchange.

  11. Characterizing 3D grain size distributions from 2D sections in mylonites using a modified version of the Saltykov method

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco; Llana-Fúnez, Sergio

    2016-04-01

    The understanding of creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSDs) that result from dynamic recrystallization processes during deformation. The methods to estimate the 3D grain size distribution directly (serial sectioning, synchrotron or X-ray-based tomography) are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has, however, two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallised mylonites; and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e., random grain packing, affects significantly the GSDs estimated by the Saltykov method. We test this using the random resampling technique in a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results proved that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution using a trust-region non-linear least-squares algorithm. The new protocol is named the two-step method. The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice in rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable for estimating the volumes of particular grain fractions, while the two-step method is suited to quantifying the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
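
    The second step of the two-step method can be illustrated as below: fit a lognormal probability density to the class midpoints and unfolded frequencies returned by the Saltykov step using non-linear least squares. The midpoint and frequency values are made up; the published implementation is the GrainSizeTools script linked above and uses a trust-region solver.

```python
# Fit a lognormal density to unfolded class midpoints/frequencies (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def lognorm_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

midpoints = np.array([5., 15., 25., 35., 45., 55., 65., 75.])                 # class midpoints (microns)
freqs = np.array([0.010, 0.028, 0.022, 0.014, 0.008, 0.004, 0.002, 0.001])   # unfolded densities

(mu, sigma), _ = curve_fit(lognorm_pdf, midpoints, freqs, p0=(np.log(25.0), 0.5))
print("geometric mean:", np.exp(mu), "multiplicative SD:", np.exp(sigma))
```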

  12. Models for the hotspot distribution

    NASA Technical Reports Server (NTRS)

    Jurdy, Donna M.; Stefanick, Michael

    1990-01-01

    Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.

  13. Exact results in the large system size limit for the dynamics of the chemical master equation, a one dimensional chain of equations.

    PubMed

    Martirosyan, A; Saakian, David B

    2011-08-01

    We apply the Hamilton-Jacobi equation (HJE) formalism to solve the dynamics of the chemical master equation (CME). We found exact analytical expressions (in the large system-size limit) for the probability distribution, including an explicit expression for the dynamics of the variance of the distribution. We also give the solution for some simple cases of the model with time-dependent rates. We derived the results of the Van Kampen method from the HJE approach using a special ansatz. Using the Van Kampen method, we give a system of ordinary differential equations (ODEs) to define the variance in a two-dimensional case. We performed numerics for the CME with stationary noise. We give analytical criteria for the disappearance of bistability in the case of stationary noise in one-dimensional CMEs.
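
    For orientation, the equations below give the generic eikonal (WKB-type) ansatz that underlies HJE treatments of the chemical master equation in the large system-size limit N; the notation (scaled rates W_r, stoichiometric jumps ν_r) is generic and not taken from the paper.

```latex
% Generic eikonal ansatz reducing the master equation to a Hamilton-Jacobi equation.
\begin{align}
  P(n,t) &\propto \exp\!\bigl[-N\,S(x,t)\bigr], \qquad x = n/N, \\
  \frac{\partial S}{\partial t} &+ H\!\left(x,\frac{\partial S}{\partial x}\right) = 0, \\
  H(x,p) &= \sum_{r} W_r(x)\,\bigl(e^{\nu_r p} - 1\bigr).
\end{align}
```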

  14. Simulating the component counts of combinatorial structures.

    PubMed

    Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon

    2018-02-09

    This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.

  15. Modeling the expected lifetime and evolution of a deme's principal genetic sequence.

    NASA Astrophysics Data System (ADS)

    Clark, Brian

    2014-03-01

    The principal genetic sequence (PGS) is the most common genetic sequence in a deme. The PGS changes over time because new genetic sequences are created by inversions, compete with the current PGS, and a small fraction become PGSs. A set of coupled difference equations provides a description of the evolution of the PGS distribution function in an ensemble of demes. Solving the set of equations produces the survival probability of a new genetic sequence and the expected lifetime of an existing PGS as a function of inversion size and rate, recombination rate, and deme size. Additionally, the PGS distribution function is used to explain the transition pathway from old to new PGSs. We compare these results to a cellular automaton based representation of a deme and the drosophila species, D. melanogaster and D. yakuba.

  16. Spread of information and infection on finite random networks

    NASA Astrophysics Data System (ADS)

    Isham, Valerie; Kaczmarska, Joanna; Nekovee, Maziar

    2011-04-01

    The modeling of epidemic-like processes on random networks has received considerable attention in recent years. While these processes are inherently stochastic, most previous work has been focused on deterministic models that ignore important fluctuations that may persist even in the infinite network size limit. In a previous paper, for a class of epidemic and rumor processes, we derived approximate models for the full probability distribution of the final size of the epidemic, as opposed to only mean values. In this paper we examine via direct simulations the adequacy of the approximate model to describe stochastic epidemics and rumors on several random network topologies: homogeneous networks, Erdös-Rényi (ER) random graphs, Barabasi-Albert scale-free networks, and random geometric graphs. We find that the approximate model is reasonably accurate in predicting the probability of spread. However, the position of the threshold and the conditional mean of the final size for processes near the threshold are not well described by the approximate model even in the case of homogeneous networks. We attribute this failure to the presence of other structural properties beyond degree-degree correlations, and in particular clustering, which are present in any finite network but are not incorporated in the approximate model. In order to test this “hypothesis” we perform additional simulations on a set of ER random graphs where degree-degree correlations and clustering are separately and independently introduced using recently proposed algorithms from the literature. Our results show that even strong degree-degree correlations have only weak effects on the position of the threshold and the conditional mean of the final size. On the other hand, the introduction of clustering greatly affects both the position of the threshold and the conditional mean. Similar analysis for the Barabasi-Albert scale-free network confirms the significance of clustering on the dynamics of rumor spread. For this network, though, with its highly skewed degree distribution, the addition of positive correlation had a much stronger effect on the final size distribution than was found for the simple random graph.

  17. Inverse Statistics and Asset Allocation Efficiency

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam

    In this paper, using inverse statistics analysis, the effect of the investment horizon on the efficiency of portfolio selection is examined. Inverse statistics analysis is a general tool, also known as the probability distribution of exit times, used for detecting the distribution of the time at which a stochastic process exits a zone. This analysis was used in Refs. 1 and 2 for studying financial returns time series. This distribution provides an optimal investment horizon which determines the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account in the TSE leads to more efficiency for large portfolios, while for stocks selected from the S&P 500, regardless of portfolio size, this strategy not only does not produce more efficient portfolios, but longer investment horizons also provide more efficiency.

  18. Reducing financial avalanches by random investments

    NASA Astrophysics Data System (ADS)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk

    2013-12-01

    Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of random traders is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders.

  19. Reducing financial avalanches by random investments.

    PubMed

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk

    2013-12-01

    Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of random traders is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders.

  20. Spatial Distribution of Large Cloud Drops

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Larsen, M.; Wiscombe, W.

    2004-01-01

    By analyzing aircraft measurements of individual drop sizes in clouds, we have shown in a companion paper (Knyazikhin et al., 2004) that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and therefore a Poisson distribution of cloud drops, these models show strong drop clustering, the more so the larger the drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics explaining how rain can form so fast. It also helps explain why remotely sensed cloud drop size is generally biased and why clouds absorb more sunlight than conventional radiative transfer models predict.

  1. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
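
    In the normal-distribution setting, the decision-cost part of the problem can be sketched as below: pick the threshold that minimises the expected cost built from false-positive and false-negative rates. The prevalence, cost weights, and marker distributions are hypothetical, and the sampling-uncertainty term that the abstract describes adding to the cost function is omitted.

```python
# Cost-minimising diagnostic threshold for two normal marker distributions.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def expected_cost(t, mu0, sd0, mu1, sd1, prevalence, c_fp, c_fn):
    fp_rate = norm.sf(t, mu0, sd0)     # non-diseased classified as diseased
    fn_rate = norm.cdf(t, mu1, sd1)    # diseased classified as non-diseased
    return c_fp * (1 - prevalence) * fp_rate + c_fn * prevalence * fn_rate

# Hypothetical parameters: non-diseased ~ N(3, 1), diseased ~ N(6, 1.5),
# prevalence 0.2, false negatives five times as costly as false positives.
res = minimize_scalar(expected_cost, bounds=(0, 10), method="bounded",
                      args=(3.0, 1.0, 6.0, 1.5, 0.2, 1.0, 5.0))
print("optimal threshold:", res.x)
```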

  2. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  3. Tolerancing aspheres based on manufacturing knowledge

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Kokot, S.; Fuchs, U.

    2017-10-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions can be assumed, they all rely on statistics, which usually means several hundred or thousand systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The extensive asphericon database was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed with the aim of establishing a robust optical tolerancing process.

  4. Tolerancing aspheres based on manufacturing statistics

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Möhl, A.; Fuchs, U.

    2017-11-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions can be assumed, they all rely on statistics, which usually means several hundred or thousand systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The extensive asphericon database was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed with the aim of establishing a robust optical tolerancing process.

  5. Evolution of the microstructure during the process of consolidation and bonding in soft granular solids.

    PubMed

    Yohannes, B; Gonzalez, M; Abebe, A; Sprockel, O; Nikfar, F; Kiang, S; Cuitiño, A M

    2016-04-30

    The evolution of microstructure during the powder compaction process was investigated using discrete particle modeling, which accounts for particle size distribution and material properties such as plasticity, elasticity, and inter-particle bonding. The material properties were calibrated based on powder compaction experiments and validated against tensile strength experiments for lactose monohydrate and microcrystalline cellulose, which are commonly used excipients in the pharmaceutical industry. The probability distribution function and the orientation of contact forces were used to study the evolution of the microstructure during the application of compaction pressure, unloading, and ejection of the compact from the die. The probability distribution function reveals that the compression contact forces increase as the compaction force increases (or the relative density increases), while the maximum value of the tensile contact forces remains the same. During unloading of the compaction pressure, the distribution approaches a normal distribution with a mean value of zero. As the contact forces evolve, the anisotropy of the powder bed also changes. In particular, during loading, the compression contact forces are aligned along the direction of the compaction pressure, whereas the tensile contact forces are oriented perpendicular to the direction of the compaction pressure. After ejection, the contact forces become isotropic. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users about the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. These statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful for assessing the statistical reliability of benchmarking conclusions.
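
    A minimal sketch of the two ECDF-based statistics advocated above, computed here from simulated placeholder errors rather than an actual benchmark set:

      import numpy as np

      rng = np.random.default_rng(1)
      # simulated, non-normal, non-centered model errors (placeholder data)
      errors = rng.standard_t(df=3, size=500) * 2.0 + 0.5
      abs_err = np.sort(np.abs(errors))

      # (1) probability that a new calculation has absolute error below a chosen threshold
      threshold = 2.0
      p_below = np.searchsorted(abs_err, threshold, side="right") / abs_err.size

      # (2) maximal error amplitude expected at a chosen confidence level (ECDF quantile)
      confidence = 0.95
      q95 = np.quantile(abs_err, confidence)

      print(f"P(|error| < {threshold}) ~ {p_below:.2f}")
      print(f"95th percentile of |error| ~ {q95:.2f}")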

  7. Discriminative Bayesian Dictionary Learning for Classification.

    PubMed

    Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal

    2016-12-01

    We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and for object and scene-category classification, using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.

  8. A method to compute SEU fault probabilities in memory arrays with error correction

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory locations are read at random according to a known distribution. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
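
    The paper derives the mishap probability analytically; purely as an illustration of the event being modeled, the following Monte Carlo sketch (with assumed SEU rate, read rate, memory size, and uniform reads) counts how often two upsets corrupt the same word before that word is read:

      import numpy as np

      rng = np.random.default_rng(2)

      # Illustrative (assumed) parameters -- not taken from the paper
      M = 4096             # memory size in words
      seu_rate = 1e-4      # SEUs per word per hour
      read_rate = 10.0     # word reads per hour, uniform over the memory
      T = 100.0            # mission duration in hours
      n_trials = 500

      def one_mission():
          upsets = np.zeros(M, dtype=int)
          total_rate = M * seu_rate + read_rate
          t = rng.exponential(1.0 / total_rate)
          while t <= T:
              if rng.random() < M * seu_rate / total_rate:   # event is an SEU
                  w = rng.integers(M)
                  upsets[w] += 1
                  if upsets[w] >= 2:                         # two upsets before a read: mishap
                      return True
              else:                                          # event is a read, which scrubs the word
                  upsets[rng.integers(M)] = 0
              t += rng.exponential(1.0 / total_rate)
          return False

      p_mishap = np.mean([one_mission() for _ in range(n_trials)])
      print(f"estimated mission mishap probability ~ {p_mishap:.3f}")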

  9. A Process-Based Transport-Distance Model of Aeolian Transport

    NASA Astrophysics Data System (ADS)

    Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.

    2017-12-01

    We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances over a variety of particle sizes. These are fitted to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk and particle-size-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.

  10. A Statistical Framework for Microbial Source Attribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velsko, S P; Allen, J E; Cunningham, C T

    2009-04-28

    This report presents a general approach to inferring transmission and source relationships among microbial isolates from their genetic sequences. The outbreak transmission graph (also called the transmission tree or transmission network) is the fundamental structure which determines the statistical distributions relevant to source attribution. The nodes of this graph are infected individuals or aggregated sub-populations of individuals in which transmitted bacteria or viruses undergo clonal expansion, leading to a genetically heterogeneous population. Each edge of the graph represents a transmission event in which one or a small number of bacteria or virions infects another node, thus increasing the size of the transmission network. Recombination and re-assortment events originate in nodes which are common to two distinct networks. In order to calculate the probability that one node was infected by another, given the observed genetic sequences of microbial isolates sampled from them, we require two fundamental probability distributions. The first is the probability of obtaining the observed mutational differences between two isolates given that they are separated by M steps in a transmission network. The second is the probability that two nodes sampled randomly from an outbreak transmission network are separated by M transmission events. We show how these distributions can be obtained from the genetic sequences of isolates obtained by sampling from past outbreaks combined with data from contact tracing studies. Realistic examples are drawn from the SARS outbreak of 2003, the FMDV outbreak in Great Britain in 2001, and HIV transmission cases. The likelihood estimators derived in this report, and the underlying probability distribution functions required to calculate them, possess certain compelling general properties in the context of microbial forensics. These include the ability to quantify the significance of a sequence 'match' or 'mismatch' between two isolates; the ability to capture non-intuitive effects of network structure on inferential power, including the 'small world' effect; the insensitivity of inferences to uncertainties in the underlying distributions; and the concept of rescaling, i.e. the ability to collapse sub-networks into single nodes and examine transmission inferences on the rescaled network.

  11. Magnetic properties in an ash flow tuff with continuous grain size variation: a natural reference for magnetic particle granulometry

    USGS Publications Warehouse

    Till, J.L.; Jackson, M.J.; Rosenbaum, J.G.; Solheid, P.

    2011-01-01

    The Tiva Canyon Tuff contains dispersed nanoscale Fe-Ti-oxide grains with a narrow magnetic grain size distribution, making it an ideal material in which to identify and study grain-size-sensitive magnetic behavior in rocks. A detailed magnetic characterization was performed on samples from the basal 5 m of the tuff. The magnetic materials in this basal section consist primarily of (low-impurity) magnetite in the form of elongated submicron grains exsolved from volcanic glass. Magnetic properties studied include bulk magnetic susceptibility, frequency-dependent and temperature-dependent magnetic susceptibility, anhysteretic remanence acquisition, and hysteresis properties. The combined data constitute a distinct magnetic signature at each stratigraphic level in the section corresponding to different grain size distributions. The inferred magnetic domain state changes progressively upward from superparamagnetic grains near the base to particles with pseudo-single-domain or metastable single-domain characteristics near the top of the sampled section. Direct observations of magnetic grain size confirm that distinct transitions in room temperature magnetic susceptibility and remanence probably denote the limits of stable single-domain behavior in the section. These results provide a unique example of grain-size-dependent magnetic properties in noninteracting particle assemblages over three decades of grain size, including close approximations of ideal Stoner-Wohlfarth assemblages, and may be considered a useful reference for future rock magnetic studies involving grain-size-sensitive properties.

  12. Modeling the Dependency Structure of Integrated Intensity Processes

    PubMed Central

    Ma, Yong-Ki

    2015-01-01

    This paper studies the important issue of dependence structure. To model this structure, the intensities within the Cox processes are driven by dependent shot noise processes, where jumps occur simultaneously and their sizes are correlated. The joint survival probability of the integrated intensities is explicitly obtained from the copula with exponential marginal distributions. This result can subsequently provide a very useful guide for credit risk management. PMID:26270638
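
    The abstract does not specify the copula, so purely as a hedged sketch: for any bivariate copula C and exponential marginals, the joint survival probability follows from P(X > x, Y > y) = 1 - F_X(x) - F_Y(y) + C(F_X(x), F_Y(y)); a Clayton copula stands in for the dependence structure here.

      import numpy as np

      def clayton_copula(u, v, theta):
          """Clayton copula C(u, v) for dependence parameter theta > 0."""
          return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

      def joint_survival(x, y, rate_x, rate_y, theta):
          """P(X > x, Y > y) for exponential marginals linked by a Clayton copula."""
          Fx = 1.0 - np.exp(-rate_x * x)
          Fy = 1.0 - np.exp(-rate_y * y)
          return 1.0 - Fx - Fy + clayton_copula(Fx, Fy, theta)

      # Illustrative (assumed) parameters for two integrated intensities
      print(joint_survival(x=1.0, y=2.0, rate_x=0.5, rate_y=0.8, theta=2.0))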

  13. Tin Whisker Electrical Short Circuit Characteristics. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Lawrence L.; Wright, Maria C.

    2009-01-01

    Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.

  14. First assembly times and equilibration in stochastic coagulation-fragmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Orsogna, Maria R.; Department of Mathematics, CSUN, Los Angeles, California 91330-8313; Lei, Qi

    2015-07-07

    We develop a fully stochastic theory for coagulation and fragmentation (CF) in a finite system with a maximum cluster size constraint. The process is modeled using a high-dimensional master equation for the probabilities of cluster configurations. For certain realizations of total mass and maximum cluster sizes, we find exact analytical results for the expected equilibrium cluster distributions. If coagulation is fast relative to fragmentation and if the total system mass is indivisible by the mass of the largest allowed cluster, we find a mean cluster-size distribution that is strikingly broader than that predicted by the corresponding mass-action equations. Combinations of total mass and maximum cluster size under which equilibration is accelerated, eluding late-stage coarsening, are also delineated. Finally, we compute the mean time it takes particles to first assemble into a maximum-sized cluster. Through careful state-space enumeration, the scaling of mean assembly times is derived for all combinations of total mass and maximum cluster size. We find that CF accelerates assembly relative to monomer kinetics only in special cases. All of our results hold in the infinite system limit and can only be derived from a high-dimensional discrete stochastic model, highlighting how classical mass-action models of self-assembly can fail.

  15. Food and habitat resource partitioning between three estuarine fish species on the Swedish west coast

    NASA Astrophysics Data System (ADS)

    Thorman, Staffan

    1983-12-01

    In 1978 the food and habitat resource partitioning of three small and common fish species, viz. Pomatoschistus microps (Krøyer), Gasterosteus aculeatus (L.) and Pungitius pungitius (L.), was studied in the river Broälven estuary on the Swedish west coast (58°22'N, 11°29'E). The area was divided into three habitats, based on environmental features. In July, September, and October stomach contents and size distribution of each species present were analysed. In July there was high food and habitat overlap between the species. Interference interactions probably occurred between some size classes of P. microps and the other two species. P. pungitius was exposed to both intra- and interspecific interactions. In September the food and habitat overlaps between G. aculeatus and P. pungitius were high, while both had low food and habitat overlaps in relation to P. microps. Interactions between G. aculeatus and P. pungitius were probably influenced by more severe abiotic conditions in one habitat, which caused lower abundances there, and higher abundances in the other two habitats. In October no interactions were observed. These results indicate that competition for food at least temporarily determines the species distribution in a temperate estuary, and that estuarine fish populations are sometimes food limited.

  16. Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.

    2016-01-01

    The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes for the GoM a first-order probabilistic landslide tsunami hazard assessment. Monte Carlo Simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture correlations seen in the data as well as uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations which fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevancy of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the north and northwest GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
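
    A minimal sketch of the Cholesky step described above; the choice of landslide parameters, their log-normal marginals, and the correlation matrix are illustrative assumptions, not the study's values:

      import numpy as np

      rng = np.random.default_rng(3)

      # Assumed correlation between log length, log width, and log thickness of SMFs
      corr = np.array([[1.0, 0.8, 0.6],
                       [0.8, 1.0, 0.7],
                       [0.6, 0.7, 1.0]])
      L = np.linalg.cholesky(corr)

      n = 10_000
      z = rng.standard_normal((n, 3)) @ L.T          # correlated standard normals

      # Map to assumed log-normal marginals (medians in km, log-standard deviations)
      medians = np.array([10.0, 4.0, 0.05])
      sigmas = np.array([0.9, 0.8, 0.7])
      length, width, thickness = (medians * np.exp(sigmas * z)).T

      print("sample correlation of log parameters:")
      print(np.corrcoef(np.log(np.column_stack([length, width, thickness])).T).round(2))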

  17. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    NASA Astrophysics Data System (ADS)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents the forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and considering an updated earthquake catalog of magnitude Mw ≥ 6.0 events that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and its respective model parameters. The first conditional probability is the probability of the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeding a certain level of seismic energy (E × 10^20 ergs). The second conditional probability is the probability of the seismic energy expected to be released per year (a × 10^20 ergs/year) exceeding a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood functions (ln L) were also estimated for all four probability distribution models; a higher value of ln L suggests a better model and a lower value a worse one. The time of the future earthquake is forecast by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), the 13 April 2016 Myanmar earthquake (M 6.9) and the 24 August 2016 Myanmar earthquake (M 6.8) are located in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area, indicating that the proposed techniques and models yield good forecasting accuracy.
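
    A hedged sketch of the distribution-fitting step, using synthetic seismic-energy values (in units of 10^20 ergs) in place of the paper's catalog: the four candidate models are fitted by maximum likelihood, compared via ln L, and the exceedance probability of a chosen energy level is read from the best fit.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      energy = rng.weibull(1.3, size=60) * 5.0     # synthetic energies, 10^20 ergs (placeholder data)

      candidates = {
          "gamma": stats.gamma,
          "lognormal": stats.lognorm,
          "weibull": stats.weibull_min,
          "log-logistic": stats.fisk,
      }

      fits = {}
      for name, dist in candidates.items():
          params = dist.fit(energy, floc=0)                    # fix the location at zero
          lnL = np.sum(dist.logpdf(energy, *params))
          fits[name] = (params, lnL)
          print(f"{name:12s}  ln L = {lnL:8.2f}")

      best_name, (best_params, _) = max(fits.items(), key=lambda kv: kv[1][1])
      E_level = 8.0                                            # chosen energy level, 10^20 ergs
      p_exceed = candidates[best_name].sf(E_level, *best_params)
      print(f"best model: {best_name}, P(energy > {E_level}) ~ {p_exceed:.3f}")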

  18. In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2012-12-01

    Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.
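
    As a hedged illustration of the estimation problem discussed above (synthetic data, not the Miyako record): the tapered Pareto survival function S(x) = (x_t/x)^beta * exp((x_t - x)/x_c) for x >= x_t is fitted by maximum likelihood; with a short catalog the corner size x_c is poorly constrained, which is the sampling bias described in the abstract.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)

      x_t = 0.5                      # observation threshold (e.g. runup in m), assumed
      beta_true, xc_true = 1.0, 8.0  # assumed tail exponent and corner (taper) size

      def sample_tapered_pareto(n):
          pareto = x_t * (1.0 - rng.random(n)) ** (-1.0 / beta_true)
          taper = x_t + rng.exponential(xc_true, n)
          return np.minimum(pareto, taper)   # min construction reproduces S(x) = (x_t/x)^b exp((x_t-x)/x_c)

      def neg_log_lik(params, x):
          beta, xc = params
          if beta <= 0 or xc <= 0:
              return np.inf
          return -np.sum(np.log(beta / x + 1.0 / xc) + beta * np.log(x_t / x) + (x_t - x) / xc)

      for n in (50, 5000):                   # short versus long catalog
          x = sample_tapered_pareto(n)
          res = minimize(neg_log_lik, x0=[1.5, 2.0], args=(x,), method="Nelder-Mead")
          print(f"n = {n:5d}: beta ~ {res.x[0]:.2f}, corner size ~ {res.x[1]:.2f}")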

  19. Characteristics of Landslide Size Distribution in Response to Different Rainfall Scenarios

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Lan, H.; Li, L.

    2017-12-01

    There have long been controversies about the characteristics of landslide size distribution in response to different rainfall scenarios. To inspect these characteristics, we collected a large amount of data, including a shallow landslide inventory with landslide areas and occurrence times recorded, and a long daily rainfall series fully covering all the landslide occurrences. Three indexes were adopted to quantitatively describe the characteristics of landslide-related rainfall events: rainfall duration, rainfall intensity, and the number of rainy days. The first index, rainfall duration, is derived from the exceptional character of a landslide-related rainfall event, which can be explained in terms of the recurrence interval or return period, according to extreme value theory. The second index, rainfall intensity, is the average rainfall over this duration. The third index is the number of rainy days in this duration. These three indexes were normalized using the standard score method to ensure that they are of the same order of magnitude. Based on these three indexes, landslide-related rainfall events were categorized by a k-means method into four scenarios: moderate rainfall, storm, long-duration rainfall, and long-duration intermittent rainfall. Landslides were in turn categorized into four groups according to the scenarios of rainfall events related to them. An inverse-gamma distribution was applied to characterize the area distributions of the four landslide groups. A tail index and a rollover of the landslide size distribution can be obtained from the parameters of the distribution. The characteristics of landslide size distribution show that the rollovers of the size distributions of landslides related to storms and long-duration rainfall are larger than those of landslides in the other two groups. This may indicate that the location of the rollover shifts right with increasing rainfall intensity and extending rainfall duration. In addition, higher rainfall intensities are prone to trigger larger rainfall-induced landslides, since the tail index of the landslide area distribution is smaller for higher rainfall intensities, which indicates a higher probability of large landslides.

  20. Locus of frequency-dependent depression identified with multiple-probability fluctuation analysis at rat climbing fibre-Purkinje cell synapses

    PubMed Central

    Silver, R Angus; Momiyama, Akiko; Cull-Candy, Stuart G

    1998-01-01

    EPSCs were recorded under whole-cell voltage clamp at room temperature from Purkinje cells in slices of cerebellum from 12- to 14-day-old rats. EPSCs from individual climbing fibre (CF) inputs were identified on the basis of their large size, paired-pulse depression and all-or-none appearance in response to a graded stimulus. Synaptic transmission was investigated over a wide range of experimentally imposed release probabilities by analysing fluctuations in the peak of the EPSC. Release probability was manipulated by altering the extracellular [Ca2+] and [Mg2+]. Quantal parameters were estimated from plots of coefficient of variation (CV) or variance against mean conductance by fitting a multinomial model that incorporated both spatial variation in quantal size and non-uniform release probability. This ‘multiple-probability fluctuation’ (MPF) analysis gave an estimate of 510 ± 50 for the number of functional release sites (N) and a quantal size (q) of 0.5 ± 0.03 nS (n = 6). Control experiments, and simulations examining the effects of non-uniform release probability, indicate that MPF analysis provides a reliable estimate of quantal parameters. Direct measurement of quantal amplitudes in the presence of 5 mM Sr2+, which gave asynchronous release, yielded distributions with a mean quantal size of 0.55 ± 0.01 nS and a CV of 0.37 ± 0.01 (n = 4). Similar estimates of q were obtained in 2 mM Ca2+ when release probability was lowered with the calcium channel blocker Cd2+. The non-NMDA receptor antagonist 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX; 1 μM) reduced both the evoked current and the quantal size (estimated with MPF analysis) to a similar degree, but did not affect the estimate of N. We used MPF analysis to identify those quantal parameters that change during frequency-dependent depression at climbing fibre-Purkinje cell synaptic connections. At low stimulation frequencies, the mean release probability (P̄r) was unusually high (0.90 ± 0.03 at 0.033 Hz, n = 5), but as the frequency of stimulation was increased, P̄r fell dramatically (0.02 ± 0.01 at 10 Hz, n = 4) with no apparent change in either q or N. This indicates that the observed 50-fold depression in EPSC amplitude is presynaptic in origin. Presynaptic frequency-dependent depression was investigated with double-pulse and multiple-pulse protocols. EPSC recovery, following simultaneous release at practically all sites, was slow, being well fitted by the sum of two exponential functions (time constants of 0.35 ± 0.09 and 3.2 ± 0.4 s, n = 5). EPSC recovery following sustained stimulation was even slower. We propose that presynaptic depression at CF synapses reflects a slow recovery of release probability following release of each quantum of transmitter. The large number of functional release sites, relatively large quantal size, and unusual dynamics of transmitter release at the CF synapse appear specialized to ensure highly reliable olivocerebellar transmission at low frequencies but to limit transmission at higher frequencies. PMID:9660900
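
    A hedged sketch of the simplest variance-mean relation underlying MPF analysis (uniform release probability, no quantal variance, simulated data): for N independent sites with release probability p and quantal size q, the mean response is I = Npq and its variance is Npq^2(1 - p), so variance plotted against mean follows the parabola var = qI - I^2/N, from which q and N can be recovered.

      import numpy as np

      rng = np.random.default_rng(7)

      N_true, q_true = 500, 0.5                     # release sites and quantal size (nS), assumed
      probs = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 0.9])
      trials = 200

      means, variances = [], []
      for p in probs:                               # each imposed [Ca2+]/[Mg2+] condition
          epsc = rng.binomial(N_true, p, trials) * q_true
          means.append(epsc.mean())
          variances.append(epsc.var(ddof=1))
      means, variances = np.array(means), np.array(variances)

      # fit var = q*I - I**2/N (a parabola through the origin)
      A = np.column_stack([means, -means ** 2])
      q_est, invN = np.linalg.lstsq(A, variances, rcond=None)[0]
      print(f"estimated q ~ {q_est:.2f} nS, estimated N ~ {1.0 / invN:.0f}")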

  1. Statistics of voids in hierarchical universes

    NASA Technical Reports Server (NTRS)

    Fry, J. N.

    1986-01-01

    As an alternative to N-point galaxy correlation function statistics, one can consider the distribution of holes, that is, the probability that a volume of given size and shape is empty of galaxies. The probability of voids resulting from a variety of hierarchical patterns of clustering is considered, and these are compared with the results of numerical simulations and with observations. A scaling relation required by the hierarchical pattern of higher order correlation functions is seen to be obeyed in the simulations, and the numerical results show a clear difference between neutrino models and cold-particle models; voids are more likely in neutrino universes. Observational data do not yet distinguish between models but are close to being able to do so.

  2. A stochastic differential equation model for the foraging behavior of fish schools.

    PubMed

    Tạ, Tôn Việt; Nguyen, Linh Thi Hoai

    2018-03-15

    Constructing models of living organisms locating food sources has important implications for understanding animal behavior and for the development of distribution technologies. This paper presents a novel simple model of stochastic differential equations for the foraging behavior of fish schools in a space including obstacles. The model is studied numerically. Three configurations of space with various food locations are considered. In the first configuration, fish swim in free but limited space. All individuals can find food with large probability while keeping their school structure. In the second and third configurations, they move in limited space with one and two obstacles, respectively. Our results reveal that the probability of foraging success is highest in the first configuration, and smallest in the third one. Furthermore, when school size increases up to an optimal value, the probability of foraging success tends to increase. When it exceeds an optimal value, the probability tends to decrease. The results agree with experimental observations.

  3. A Bayesian predictive two-stage design for phase II clinical trials.

    PubMed

    Sambucini, Valeria

    2008-04-15

    In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined by specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and by assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of obtaining a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
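
    A minimal sketch of the kind of predictive calculation underlying such designs; the prior, target response rate, posterior threshold, and sample sizes below are illustrative assumptions, not the STD values:

      from scipy.stats import beta, betabinom

      # Illustrative design quantities (assumed)
      a0, b0 = 0.5, 0.5        # Beta prior on the true response rate
      p_target = 0.20          # target response rate to exceed
      post_threshold = 0.90    # required posterior probability at the end of the trial
      n1, r1 = 15, 5           # stage-1 sample size and observed responses
      n2 = 20                  # stage-2 sample size

      a1, b1 = a0 + r1, b0 + n1 - r1     # posterior after stage 1

      pred_success = 0.0
      for r2 in range(n2 + 1):
          # predictive probability of observing r2 stage-2 responses (beta-binomial)
          w = betabinom.pmf(r2, n2, a1, b1)
          # posterior P(true rate > target) after both stages
          post_prob = beta.sf(p_target, a1 + r2, b1 + n2 - r2)
          if post_prob >= post_threshold:
              pred_success += w

      print(f"predictive probability of a successful trial ~ {pred_success:.3f}")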

  4. A stochastic differential equation model for the foraging behavior of fish schools

    NASA Astrophysics Data System (ADS)

    Tạ, Tôn Việt; Nguyen, Linh Thi Hoai

    2018-05-01

    Constructing models of living organisms locating food sources has important implications for understanding animal behavior and for the development of distribution technologies. This paper presents a novel simple model of stochastic differential equations for the foraging behavior of fish schools in a space including obstacles. The model is studied numerically. Three configurations of space with various food locations are considered. In the first configuration, fish swim in free but limited space. All individuals can find food with large probability while keeping their school structure. In the second and third configurations, they move in limited space with one and two obstacles, respectively. Our results reveal that the probability of foraging success is highest in the first configuration, and smallest in the third one. Furthermore, when school size increases up to an optimal value, the probability of foraging success tends to increase. When it exceeds an optimal value, the probability tends to decrease. The results agree with experimental observations.

  5. Cyclic variation in seasonal recruitment and the evolution of the seasonal decline in Ural owl clutch size.

    PubMed Central

    Brommer, Jon E; Pietiäinen, Hannu; Kokko, Hanna

    2002-01-01

    Plastic life-history traits can be viewed as adaptive responses to environmental conditions, described by a reaction norm. In birds, the decline in clutch size with advancing laying date has been viewed as a reaction norm in response to the parent's own (somatic or local environmental) condition and the seasonal decline in its offspring's reproductive value. Theory predicts that differences in seasonal recruitment are mirrored in the seasonal decrease in clutch size. We tested this prediction in the Ural owl. The owl's main prey, voles, show a cycle of low, increase and peak phases. Recruitment probability had a humped distribution in both the increase and peak phases. Average recruitment probability was two to three times higher in the increase phase and declined faster in the latter part of the season when compared with the peak phase. Clutch size decreased twice as steeply in the peak phase (0.1 eggs day^-1) as in the increase phase (0.05 eggs day^-1). This result appears to refute theoretical predictions of seasonal clutch size declines. However, a re-examination of current theory shows that the predictions of modelling are less robust to details of seasonal condition accumulation in birds than originally thought. The observed pattern can be predicted, assuming specifically shaped seasonal increases in condition across individuals. PMID:11916482

  6. Seasonal variation in size-dependent survival of juvenile Atlantic salmon (Salmo salar): Performance of multistate capture-mark-recapture models

    USGS Publications Warehouse

    Letcher, B.H.; Horton, G.E.

    2008-01-01

    We estimated the magnitude and shape of size-dependent survival (SDS) across multiple sampling intervals for two cohorts of stream-dwelling Atlantic salmon (Salmo salar) juveniles using multistate capture-mark-recapture (CMR) models. Simulations designed to test the effectiveness of multistate models for detecting SDS in our system indicated that error in SDS estimates was low and that both time-invariant and time-varying SDS could be detected with sample sizes of >250, average survival of >0.6, and average probability of capture of >0.6, except for cases of very strong SDS. In the field (N ≈ 750, survival 0.6-0.8 among sampling intervals, probability of capture 0.6-0.8 among sampling occasions), about one-third of the sampling intervals showed evidence of SDS, with poorer survival of larger fish during the age-2+ autumn and quadratic survival (opposite direction between cohorts) during age-1+ spring. The varying magnitude and shape of SDS among sampling intervals suggest a potential mechanism for the maintenance of the very wide observed size distributions. Estimating SDS using multistate CMR models appears complementary to established approaches, can provide estimates with low error, and can be used to detect intermittent SDS. © 2008 NRC Canada.

  7. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast wave height, wind speed, and current velocity data for the Bohai Sea are sampled for a case study. Four kinds of distributions, namely the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and capture the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.

  8. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also derived.

  9. Cache-enabled small cell networks: modeling and tradeoffs.

    PubMed

    Baştuǧ, Ejder; Bennis, Mehdi; Kountouris, Marios; Debbah, Mérouane

    We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.

  10. Exact combinatorial approach to finite coagulating systems

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr

    2018-02-01

    This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.

  11. Mutant number distribution in an exponentially growing population

    NASA Astrophysics Data System (ADS)

    Keller, Peter; Antal, Tibor

    2015-01-01

    We present an explicit solution to a classic model of cell-population growth introduced by Luria and Delbrück (1943 Genetics 28 491-511) 70 years ago to study the emergence of mutations in bacterial populations. In this model a wild-type population is assumed to grow exponentially in a deterministic fashion. Proportional to the wild-type population size, mutants arrive randomly and initiate new sub-populations of mutants that grow stochastically according to a supercritical birth and death process. We give an exact expression for the generating function of the total number of mutants at a given wild-type population size. We present a simple expression for the probability of finding no mutants, and a recursion formula for the probability of finding a given number of mutants. In the ‘large population-small mutation’ limit we recover recent results of Kessler and Levine (2014 J. Stat. Phys. doi:10.1007/s10955-014-1143-3) for a fully stochastic version of the process.
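
    For comparison, the classical Luria-Delbrück (Lea-Coulson) mutant-number distribution, which neglects mutant cell death, can be computed by a simple recursion in the expected number of mutations m; this is a hedged sketch of the classical formulation, not of the birth-death generalization solved in the paper.

      import numpy as np

      def lea_coulson_pmf(m, n_max):
          """Classical Luria-Delbrueck (Lea-Coulson) distribution of mutant counts."""
          p = np.zeros(n_max + 1)
          p[0] = np.exp(-m)
          for n in range(1, n_max + 1):
              k = np.arange(n)
              p[n] = (m / n) * np.sum(p[k] / (n - k + 1))
          return p

      m = 2.0                        # expected number of mutation events (assumed)
      pmf = lea_coulson_pmf(m, 50)
      print(f"P(no mutants)    = {pmf[0]:.4f}")
      print(f"P(<= 10 mutants) = {pmf[:11].sum():.4f}")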

  12. A Bayesian-frequentist two-stage single-arm phase II clinical trial design.

    PubMed

    Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen

    2012-08-30

    It is well-known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To obtain better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H0). Measures of the design properties (for example, the probability of early trial termination and the expected sample size) under both frequentist and Bayesian settings are derived. Moreover, under the Bayesian setting, the upper and lower boundaries are determined with the predictive probability of a successful trial outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.

  13. The dose response relation for rat spinal cord paralysis analyzed in terms of the effective size of the functional subunit

    NASA Astrophysics Data System (ADS)

    Adamus-Górka, Magdalena; Mavroidis, Panayiotis; Brahme, Anders; Lind, Bengt K.

    2008-11-01

    Radiobiological models for estimating normal tissue complication probability (NTCP) are increasingly used in order to quantify or optimize the clinical outcome of radiation therapy. A good NTCP model should fulfill at least the following two requirements: (a) it should predict the sigmoid shape of the corresponding dose-response curve and (b) it should describe the probability of a specified response for arbitrary non-uniform dose delivery for a given endpoint as accurately as possible, i.e. predict the volume dependence. In recent studies of the volume effect of a rat spinal cord after irradiation with narrow and broad proton beams the authors claim that none of the existing NTCP models is able to describe their results. Published experimental data have been used here to try to quantify the change in the effective dose (D50) causing 50% response for different field sizes. The present study was initiated to describe the induction of white matter necrosis in a rat spinal cord after irradiation with narrow proton beams in terms of the mean dose to the effective volume of the functional subunit (FSU). The physically delivered dose distribution was convolved with a function describing the effective size or, more accurately, the sensitivity distribution of the FSU to obtain the effective mean dose deposited in it. This procedure allows the determination of the mean D50 value of the FSUs of a certain size, which is of interest, for example, if the cell nucleus of the oligodendrocyte is the sensitive target. Using the least-squares method to compare the effective doses for different sizes of the functional subunits with the experimental data, the best fit was obtained with a length of about 9 mm. For the non-uniform dose distributions an effective FSU length of 8 mm gave the optimal fit with the probit dose-response model. The method could also be used to interpret the so-called bath and shower experiments, where the heterogeneous dose delivery was used in the convolution process. The assumption of an effective FSU size is consistent with most of the effects seen when different portions of the rat spinal cord are irradiated to different doses. The effective FSU length from these experiments is about 8.5 ± 0.5 mm. This length could be interpreted as an effective size of the functional subunits in a rat spinal cord, where multiple myelin sheaths are connected by a single oligodendrocyte and repair is limited by the range of oligodendrocyte progenitor cell diffusion. From the experimental data it was even possible to suggest an effective FSU sensitivity distribution more likely than a uniform one.
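
    A hedged one-dimensional sketch of the convolution step described above; the beam profile, boxcar FSU sensitivity, and probit parameters are all assumed for illustration and are not the study's fitted values:

      import numpy as np
      from scipy.stats import norm

      dz = 0.1                                             # grid spacing along the cord, mm
      z = np.arange(-30.0, 30.0, dz)
      field_length = 4.0                                   # irradiated field length, mm (assumed)
      dose = np.where(np.abs(z) <= field_length / 2, 80.0, 0.0)   # delivered dose in Gy (assumed)

      fsu_length = 8.0                                     # effective FSU length, mm
      kernel = np.ones(int(round(fsu_length / dz)))
      kernel /= kernel.sum()
      effective_dose = np.convolve(dose, kernel, mode="same")    # mean dose over an FSU at each position

      # probit dose-response for the most exposed FSU (D50 and slope m assumed)
      D50, m = 20.0, 0.15
      d_max = effective_dose.max()
      ntcp = norm.cdf((d_max - D50) / (m * D50))
      print(f"max FSU mean dose ~ {d_max:.1f} Gy, response probability ~ {ntcp:.2f}")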

  14. A statics-dynamics equivalence through the fluctuation–dissipation ratio provides a window into the spin-glass phase from nonequilibrium measurements

    PubMed Central

    Baity-Jesi, Marco; Calore, Enrico; Cruz, Andres; Fernandez, Luis Antonio; Gil-Narvión, José Miguel; Gordillo-Guerrero, Antonio; Iñiguez, David; Maiorano, Andrea; Marinari, Enzo; Martin-Mayor, Victor; Monforte-Garcia, Jorge; Muñoz Sudupe, Antonio; Navarro, Denis; Parisi, Giorgio; Perez-Gaviro, Sergio; Ricci-Tersenghi, Federico; Ruiz-Lorenzo, Juan Jesus; Schifano, Sebastiano Fabio; Tarancón, Alfonso; Tripiccione, Raffaele; Yllanes, David

    2017-01-01

    We have performed a very accurate computation of the nonequilibrium fluctuation–dissipation ratio for the 3D Edwards–Anderson Ising spin glass, by means of large-scale simulations on the special-purpose computers Janus and Janus II. This ratio (computed for finite times on very large, effectively infinite, systems) is compared with the equilibrium probability distribution of the spin overlap for finite sizes. Our main result is a quantitative statics-dynamics dictionary, which could allow the experimental exploration of important features of the spin-glass phase without requiring uncontrollable extrapolations to infinite times or system sizes. PMID:28174274

  15. Methodology of Calculation the Terminal Settling Velocity Distribution of Spherical Particles for High Values of the Reynold's Number

    NASA Astrophysics Data System (ADS)

    Surowiak, Agnieszka; Brożek, Marian

    2014-03-01

    The particle settling velocity is the separation feature in processes such as flowing classification and jigging. It characterizes the material fed to the separation process and belongs to the so-called complex features, because it is a function of particle density and size, i.e. a function of two simple features. The affiliation to a given subset is determined by the values of these two properties, and the distribution of such a feature in a sample is a function of the distributions of particle density and size. Knowledge of the distribution of particle settling velocity in the jigging process is as important as knowledge of the particle size distribution in screening or of the particle density distribution in dense-media beneficiation. The paper presents a method of determining the distribution of settling velocity in a sample of spherical particles for turbulent particle motion, in which the settling velocity is expressed by the Newton formula. Because it depends on particle density and size, which are random variables with given distributions, the settling velocity is itself a random variable. Applying theorems of probability concerning distribution functions of random variables, the authors present a general formula for the probability density function of settling velocity under turbulent motion and, in particular, calculate the probability density function for Weibull forms of the frequency functions of particle size and density. The distribution of settling velocity is calculated numerically and presented in graphical form. The paper presents a simulation of the calculation of the settling velocity distribution on the basis of real distributions of density and projective diameter of particles, assuming that the particles are spherical.
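
    A hedged numerical sketch of the procedure described above: particle diameter and density are drawn from assumed Weibull distributions and propagated through the Newton settling-velocity formula for turbulent motion, v = sqrt(4 g d (rho_s - rho) / (3 C_D rho)) with C_D ~ 0.44, to obtain the empirical settling-velocity distribution.

      import numpy as np

      rng = np.random.default_rng(6)

      g, rho_f, C_D = 9.81, 1000.0, 0.44        # gravity, water density (kg/m^3), drag coefficient

      n = 100_000
      # Assumed Weibull marginals for projective diameter (m) and particle density (kg/m^3)
      diameter = 0.01 * rng.weibull(2.5, n)                  # scale 10 mm, shape 2.5
      density = 1800.0 + 1500.0 * rng.weibull(2.0, n)        # solids denser than water

      # Newton formula for terminal settling velocity in the turbulent regime
      velocity = np.sqrt(4.0 * g * diameter * (density - rho_f) / (3.0 * C_D * rho_f))

      # Empirical distribution of settling velocity
      quantiles = np.quantile(velocity, [0.1, 0.5, 0.9])
      print("settling velocity quantiles (m/s):", np.round(quantiles, 3))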

  16. Ceres and the terrestrial planets impact cratering record

    NASA Astrophysics Data System (ADS)

    Strom, R. G.; Marchi, S.; Malhotra, R.

    2018-03-01

    Dwarf planet Ceres, the largest object in the Main Asteroid Belt, has a surface that exhibits a range of crater densities for a crater diameter range of 5-300 km. In all areas the shape of the craters' size-frequency distribution is very similar to those of the most ancient heavily cratered surfaces on the terrestrial planets. The most heavily cratered terrain on Ceres covers ∼15% of its surface and has a crater density similar to the highest crater density on <1% of the lunar highlands. This region of higher crater density on Ceres probably records the high impact rate at early times and indicates that the other 85% of Ceres was partly resurfaced after the Late Heavy Bombardment (LHB) at ∼4 Ga. The Ceres cratering record strongly indicates that the period of Late Heavy Bombardment originated from an impactor population whose size-frequency distribution resembles that of the Main Belt Asteroids.

  17. Evolution of a Fluctuating Population in a Randomly Switching Environment.

    PubMed

    Wienand, Karl; Frey, Erwin; Mobilia, Mauro

    2017-10-13

    Environment plays a fundamental role in the competition for resources, and hence in the evolution of populations. Here, we study a well-mixed, finite population consisting of two strains competing for the limited resources provided by an environment that randomly switches between states of abundance and scarcity. Assuming that one strain grows slightly faster than the other, we consider two scenarios-one of pure resource competition, and one in which one strain provides a public good-and investigate how environmental randomness (external noise) coupled to demographic (internal) noise determines the population's fixation properties and size distribution. By analytical means and simulations, we show that these coupled sources of noise can significantly enhance the fixation probability of the slower-growing species. We also show that the population size distribution can be unimodal, bimodal, or multimodal and undergoes noise-induced transitions between these regimes when the rate of switching matches the population's growth rate.

  18. Evolution of a Fluctuating Population in a Randomly Switching Environment

    NASA Astrophysics Data System (ADS)

    Wienand, Karl; Frey, Erwin; Mobilia, Mauro

    2017-10-01

    Environment plays a fundamental role in the competition for resources, and hence in the evolution of populations. Here, we study a well-mixed, finite population consisting of two strains competing for the limited resources provided by an environment that randomly switches between states of abundance and scarcity. Assuming that one strain grows slightly faster than the other, we consider two scenarios—one of pure resource competition, and one in which one strain provides a public good—and investigate how environmental randomness (external noise) coupled to demographic (internal) noise determines the population's fixation properties and size distribution. By analytical means and simulations, we show that these coupled sources of noise can significantly enhance the fixation probability of the slower-growing species. We also show that the population size distribution can be unimodal, bimodal, or multimodal and undergoes noise-induced transitions between these regimes when the rate of switching matches the population's growth rate.

  19. Weighted and Clouded Distributions

    DTIC Science & Technology

    1988-02-01

    AND BIRTH ORDER Smart (1963, 1964) and Sprott (1964) examined a number of hypotheses on the incidence of alcoholism in Canadian families using the data...on family size and birth order of 242 alcoholics admitted to three alcoholism clinics in Ontario. The method of sampling is thus of the type...light of the model (6.4). If we assume that birth order has no relationship to becoming an alcoholic, and the probability of an alcoholic being

  20. A moment-convergence method for stochastic analysis of biochemical reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiajun; Nie, Qing; Zhou, Tianshou, E-mail: mcszhtsh@mail.sysu.edu.cn

    Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distributions, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
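    The definition quoted above can be illustrated directly: the k-th convergent moment at a point z0 is the k-th Taylor coefficient of the probability-generating function G(z) = Σ_n p_n z^n, i.e. G^(k)(z0)/k! = Σ_{n≥k} C(n,k) p_n z0^(n−k). A minimal check for a Poisson distribution, where the closed form is λ^k e^{λ(z0−1)}/k!, is sketched below.

```python
# Convergent moments as Taylor coefficients of the probability-generating function,
# checked against the closed form for a Poisson(lambda) distribution.
import numpy as np
from scipy.special import comb
from scipy.stats import poisson
from math import factorial, exp

lam, z0, n_max = 4.0, 0.5, 200
n = np.arange(n_max + 1)
p = poisson.pmf(n, lam)

def convergent_moment(k):
    m = n[k:]                                  # only terms with n >= k contribute
    return float(np.sum(comb(m, k) * p[k:] * z0 ** (m - k)))

for k in range(6):
    numeric = convergent_moment(k)
    exact = lam ** k * exp(lam * (z0 - 1.0)) / factorial(k)
    print(f"k={k}:  numeric={numeric:.6f}   closed form={exact:.6f}")
```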

  1. Geologic implications of the Apollo 14 Fra Mauro breccias and comparison with ejecta from the Ries Crater, Germany

    USGS Publications Warehouse

    Chao, E.C.T.

    1973-01-01

    On the basis of petrographic and laboratory and active seismic data for the Fra Mauro breccias, and by comparison with the nature and distribution of the ejecta from the Ries crater, Germany, some tentative conclusions regarding the geologic significance of the Fra Mauro Formation on the moon can be drawn. The Fra Mauro Formation, as a whole, consists of unwelded, porous ejecta, slightly less porous than the regolith. It contains hand-specimen and larger size clasts of strongly annealed complex breccias, partly to slightly annealed breccias, basalts, and perhaps spherule-rich breccias. These clasts are embedded in a matrix of porous aggregate dominated by mineral and breccia fragments and probably largely free of undevitrified glass. All strongly annealed hand-specimen-size breccias are clasts in the Fra Mauro Formation. To account for the porous, unwelded state of the Fra Mauro Formation, the ejecta must have been deposited at a temperature below that required for welding and annealing. Large boulders probably compacted by the Cone crater event occur near the rim of the crater. They probably consist of a similar suite of fragments, but are probably less porous than the formation. The geochronologic clocks of fragments in the Fra Mauro Formation, with textures ranging from unannealed to strongly annealed, were not reset or strongly modified by the Imbrian event. Strongly annealed breccia clasts and basalt clasts are pre-Imbrian, and probably existed as ejecta mixed with basalt flows in the Imbrium Basin prior to the Imbrian event. The Imbrian event probably occurred between 3.90 or 3.88 and 3.65 b.y. ago.

  2. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
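    The model-selection step described above can be sketched with information-theoretic model weights: fit several plausible candidate families to the small dataset by maximum likelihood and convert their AIC values into model probabilities (Akaike weights), which can then be used to reweight propagated samples rather than committing to a single model. The candidate families and data below are assumptions for illustration, not those of the paper.

```python
# Multimodel inference sketch: MLE fits of candidate distributions to scarce data,
# then AIC-based model probabilities (Akaike weights).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=0.4, size=25)     # small synthetic dataset

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "gamma":     stats.gamma,
}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(data)                            # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * len(params) - 2 * loglik

# Akaike weights: w_i = exp(-0.5 * (AIC_i - AIC_min)) / sum_j exp(-0.5 * (AIC_j - AIC_min))
a_min = min(aic.values())
weights = {k: np.exp(-0.5 * (v - a_min)) for k, v in aic.items()}
total = sum(weights.values())
for k in weights:
    print(f"{k:10s}  AIC = {aic[k]:7.2f}   model probability = {weights[k] / total:.3f}")
```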

  3. Reducing Capacities and Distribution of Redox-Active Functional Groups in Low Molecular Weight Fractions of Humic Acids.

    PubMed

    Yang, Zhen; Kappler, Andreas; Jiang, Jie

    2016-11-15

    Humic substances (HS) are redox-active organic compounds with a broad spectrum of molecular sizes and reducing capacities, that is, the number of electrons donated or accepted. However, it is unknown what role the distribution of redox-active functional groups across different molecular sizes plays in HS redox reactions within microenvironments of varying pore size. We used dialysis experiments to separate bulk humic acids (HA) into low molecular weight fractions (LMWF) and retentate, i.e., the remaining HA in the dialysis bag. LMWF accounted for only 2% of the total organic carbon content of the HA. However, their reducing capacities per gram of carbon were up to 33 times greater than those of either the bulk HA or the retentate. For a structural/mechanistic understanding of the high reducing capacity of the LMWF, we used fluorescence spectroscopy. We found that the LMWF showed significant fluorescence intensities for quinone-like functional groups, as indicated by the quinoid π-π* transition, that are probably responsible for the high reducing capacities. Therefore, the small-sized HS fraction can play a major role in the redox transformation of metals or pollutants trapped in soil micropores (<2.5 nm diameter).

  4. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
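    The basic idea quoted above can be sketched as a simple Monte Carlo goodness-of-fit test (a simplified variant, not the exact statistic of the paper): the hypothesized density is rejected when the observed draws have implausibly low density values, with the null distribution of the statistic estimated by simulating i.i.d. samples of the same size from the specified density.

```python
# Hedged sketch of a density-based goodness-of-fit test with a Monte Carlo null distribution.
import numpy as np
from scipy import stats

def low_density_pvalue(x, f=stats.norm(), n_sim=5000, seed=0):
    """p-value for H0: x ~ f, using T = mean log-density (small T indicates poor fit)."""
    rng = np.random.default_rng(seed)
    t_obs = np.mean(f.logpdf(x))
    t_sim = np.array([np.mean(f.logpdf(f.rvs(size=len(x), random_state=rng)))
                      for _ in range(n_sim)])
    return np.mean(t_sim <= t_obs)            # one-sided: how often a true-model sample is as low

rng = np.random.default_rng(1)
good = rng.normal(size=100)                   # really drawn from N(0, 1)
bad = rng.standard_t(df=2, size=100)          # heavy-tailed, hard to detect from the CDF alone
print("p-value, correct model  :", low_density_pvalue(good))
print("p-value, heavy-tailed   :", low_density_pvalue(bad))
```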

  5. Population size influences amphibian detection probability: implications for biodiversity monitoring programs.

    PubMed

    Tanadini, Lorenzo G; Schmidt, Benedikt R

    2011-01-01

    Monitoring is an integral part of species conservation. Monitoring programs must take imperfect detection of species into account in order to be reliable. Theory suggests that detection probability may be determined by population size but this relationship has not yet been assessed empirically. Population size is particularly important because it may induce heterogeneity in detection probability and thereby cause bias in estimates of biodiversity. We used a site occupancy model to analyse data from a volunteer-based amphibian monitoring program to assess how well different variables explain variation in detection probability. An index to population size best explained detection probabilities for four out of six species (to avoid circular reasoning, we used the count of individuals at a previous site visit as an index to current population size). The relationship between the population index and detection probability was positive. Commonly used weather variables best explained detection probabilities for two out of six species. Estimates of site occupancy probabilities differed depending on whether the population index was or was not used to model detection probability. The relationship between the population index and detectability has implications for the design of monitoring and species conservation. Most importantly, because many small populations are likely to be overlooked, monitoring programs should be designed in such a way that small populations are not overlooked. The results also imply that methods cannot be standardized in such a way that detection probabilities are constant. As we have shown here, one can easily account for variation in population size in the analysis of data from long-term monitoring programs by using counts of individuals from surveys at the same site in previous years. Accounting for variation in population size is important because it can affect the results of long-term monitoring programs and ultimately the conservation of imperiled species.

  6. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    PubMed

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco, spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were only lifted within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (<31 μm, depending on the used grain size classification). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface, on which the dust devils evolved, also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable with respect to their vertical grain size distribution and relative particle load, although both dust devils differed in their dimensions and intensities. A general trend of decreasing grain sizes with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  7. Bayesian predictive power: choice of prior and some recommendations for its use as probability of success in drug development.

    PubMed

    Rufibach, Kaspar; Burger, Hans Ulrich; Abt, Markus

    2016-09-01

    Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not equal, to a β-distribution. Alternative priors are discussed, and practical recommendations to assess the sensitivity of Bayesian predictive power to its input parameters are provided. Copyright © 2016 John Wiley & Sons, Ltd.
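    The definition above can be made concrete for a one-sided z-test: the power as a function of the true effect is averaged over a normal prior for the effect size. The sketch below uses illustrative numbers (effect, prior, sample size), not values from the paper, and checks the numerical integral against the normal/normal closed form.

```python
# Bayesian predictive power = E_prior[ power(delta) ] for a one-sided z-test of a mean.
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, sigma, n = 0.025, 1.0, 50
z_crit = stats.norm.ppf(1 - alpha)

def power(delta):
    return stats.norm.cdf(delta * np.sqrt(n) / sigma - z_crit)

prior = stats.norm(loc=0.3, scale=0.2)        # prior belief about the true effect size

bpp, _ = quad(lambda d: power(d) * prior.pdf(d), -np.inf, np.inf)

# Closed form for the normal/normal case, for checking
b = np.sqrt(n) / sigma
bpp_exact = stats.norm.cdf((b * 0.3 - z_crit) / np.sqrt(1 + (b * 0.2) ** 2))
print(f"Bayesian predictive power: {bpp:.4f} (numeric), {bpp_exact:.4f} (closed form)")
print(f"power at the prior mean  : {power(0.3):.4f}")
```

    Note how averaging over prior uncertainty pulls the predictive power away from the conditional power evaluated at the prior mean, which is one of the features the paper examines.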

  8. Robustness of survival estimates for radio-marked animals

    USGS Publications Warehouse

    Bunck, C.M.; Chen, C.-L.

    1992-01-01

    Telemetry techniques are often used to study the survival of birds and mammals, particularly when mark-recapture approaches are unsuitable. Both parametric and nonparametric methods to estimate survival have been developed or modified from other applications. An implicit assumption in these approaches is that the probability of re-locating an animal with a functioning transmitter is one. A Monte Carlo study was conducted to determine the bias and variance of the Kaplan-Meier estimator and an estimator based also on the assumption of constant hazard, and to evaluate the performance of the two-sample tests associated with each. Modifications of each estimator which allow a re-location probability of less than one are described and evaluated. Generally, the unmodified estimators were biased but had lower variance. At low sample sizes all estimators performed poorly. Under the null hypothesis, the distribution of all test statistics reasonably approximated the null distribution when survival was low but not when it was high. The powers of the two-sample tests were similar.

  9. Floe-size distributions in laboratory ice broken by waves

    NASA Astrophysics Data System (ADS)

    Herman, Agnieszka; Evers, Karl-Ulrich; Reimer, Nils

    2018-02-01

    This paper presents the analysis of floe-size distribution (FSD) data obtained in laboratory experiments of ice breaking by waves. The experiments, performed at the Large Ice Model Basin (LIMB) of the Hamburg Ship Model Basin (Hamburgische Schiffbau-Versuchsanstalt, HSVA), consisted of a number of tests in which an initially continuous, uniform ice sheet was broken by regular waves with prescribed characteristics. The floes' characteristics (surface area; minor and major axis, and orientation of equivalent ellipse) were obtained from digital images of the ice sheets after five tests. The analysis shows that although the floe sizes cover a wide range of values (up to 5 orders of magnitude in the case of floe surface area), their probability density functions (PDFs) do not have heavy tails, but exhibit a clear cut-off at large floe sizes. Moreover, the PDFs have a maximum that can be attributed to wave-induced flexural strain, producing preferred floe sizes. It is demonstrated that the observed FSD data can be described by theoretical PDFs expressed as a weighted sum of two components, a tapered power law and a Gaussian, reflecting multiple fracture mechanisms contributing to the FSD as it evolves in time. The results are discussed in the context of theoretical and numerical research on fragmentation of sea ice and other brittle materials.

  10. Optimized lower leg injury probability curves from post-mortem human subject tests under axial impacts

    PubMed Central

    Yoganandan, Narayan; Arun, Mike W.J.; Pintar, Frank A.; Szabo, Aniko

    2015-01-01

    Objective Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. Methods The study re-examined lower leg PMHS data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and non-injury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and the age was chosen as the co-variable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal and log-logistic distributions was based on the Akaike Information Criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. Results The mean age, stature, and weight were 58.2 ± 15.1 years, 1.74 ± 0.08 m, and 74.9 ± 13.8 kg. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function of the three distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for the 25-, 45-, and 65-year-old age groups at 5, 25, and 50% risk levels for lower leg fracture. For 25, 45 and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. Conclusions This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.
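    A hedged sketch of the survival-analysis approach described above is given below: a Weibull model for peak axial force with specimen age shifting the scale parameter, injury tests treated as left-censored at the measured peak force and non-injury tests as right-censored. The synthetic data, parameterization, and optimizer are assumptions for illustration only, not the paper's procedure or results.

```python
# Censored Weibull injury-risk fit with an age covariate (illustrative synthetic data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic PMHS-like data: age [years], applied peak force [kN], injury indicator
age = rng.uniform(30, 80, size=60)
true_scale = np.exp(2.6 - 0.008 * age)           # injury-force scale decreases with age
force_at_injury = true_scale * rng.weibull(4.0, size=60)
force = rng.uniform(3.0, 12.0, size=60)          # peak force actually applied in each test
injury = force >= force_at_injury                # injury occurred at or below the peak force

def negloglik(theta):
    b0, b1, log_k = theta
    k = np.exp(log_k)
    lam = np.exp(b0 + b1 * age)                  # age-dependent Weibull scale
    s = np.exp(-(force / lam) ** k)              # probability of surviving (no injury) at peak force
    ll = np.where(injury, np.log1p(-s + 1e-12), np.log(s + 1e-12))
    return -np.sum(ll)

fit = minimize(negloglik, x0=np.array([2.0, 0.0, 1.0]), method="Nelder-Mead")
b0, b1, log_k = fit.x
k = np.exp(log_k)

for a in (25, 45, 65):
    lam = np.exp(b0 + b1 * a)
    for risk in (0.05, 0.25, 0.50):
        f_risk = lam * (-np.log(1 - risk)) ** (1 / k)   # force at the given injury risk
        print(f"age {a}: force at {int(risk * 100)}% risk = {f_risk:.1f} kN")
```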

  11. Extreme events and event size fluctuations in biased random walks on networks.

    PubMed

    Kishore, Vimal; Santhanam, M S; Amritkar, R E

    2012-05-01

    Random walk on discrete lattice models is important to understand various types of transport processes. The extreme events, defined as exceedences of the flux of walkers above a prescribed threshold, have been studied recently in the context of complex networks. This was motivated by the occurrence of rare events such as traffic jams, floods, and power blackouts which take place on networks. In this work, we study extreme events in a generalized random walk model in which the walk is preferentially biased by the network topology. The walkers preferentially choose to hop toward the hubs or small degree nodes. In this setting, we show that extremely large fluctuations in event sizes are possible on small degree nodes when the walkers are biased toward the hubs. In particular, we obtain the distribution of event sizes on the network. Further, the probability for the occurrence of extreme events on any node in the network depends on its "generalized strength," a measure of the ability of a node to attract walkers. The generalized strength is a function of the degree of the node and that of its nearest neighbors. We obtain analytical and simulation results for the probability of occurrence of extreme events on the nodes of a network using a generalized random walk model. The result reveals that the nodes with a larger value of generalized strength, on average, display lower probability for the occurrence of extreme events compared to the nodes with lower values of generalized strength.

  12. Robust state transfer in the quantum spin channel via weak measurement and quantum measurement reversal

    NASA Astrophysics Data System (ADS)

    He, Zhi; Yao, Chunmei; Zou, Jian

    2013-10-01

    Using the weak measurement (WM) and quantum measurement reversal (QMR) approach, robust state transfer and entanglement distribution can be realized in the spin-1/2 Heisenberg chain. We find that the ultrahigh fidelity and long distance of quantum state transfer with certain success probability can be obtained using proper WM and QMR, i.e., the average fidelity of a general pure state from 80% to almost 100%, which is almost size independent. We also find that the distance and quality of entanglement distribution for the Bell state and the general Werner mixed state can be obviously improved by the WM and QMR approach.

  13. Prediction of slant path rain attenuation statistics at various locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1977-01-01

    The paper describes a method for predicting slant path attenuation statistics at arbitrary locations for variable frequencies and path elevation angles. The method involves the use of median reflectivity factor-height profiles measured with radar as well as the use of long-term point rain rate data and assumed or measured drop size distributions. The attenuation coefficient due to cloud liquid water in the presence of rain is also considered. Absolute probability fade distributions are compared for eight cases: Maryland (15 GHz), Texas (30 GHz), Slough, England (19 and 37 GHz), Fayetteville, North Carolina (13 and 18 GHz), and Cambridge, Massachusetts (13 and 18 GHz).

  14. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects

    PubMed Central

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics with open populations of variable population sizes are considered where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computations of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, which demonstrates that this approach is particularly well-suited for studying the impact of varying rates for immigration, births, deaths, infection, recovery from infection, and loss of immunity. PMID:27010993

  15. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects.

    PubMed

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics with open populations of variable population sizes are considered where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computations of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, which demonstrates that this approach is particularly well-suited for studying the impact of varying rates for immigration, births, deaths, infection, recovery from infection, and loss of immunity.

  16. A short note on the maximal point-biserial correlation under non-normality.

    PubMed

    Cheng, Ying; Liu, Haiyan

    2016-11-01

    The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
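    The maximal point-biserial correlation for a given continuous distribution and a given p can be obtained numerically: the correlation is maximized when the binary variable flags the top p-fraction of the continuous variable, so one dichotomizes at the (1−p) quantile and evaluates r = (μ1 − μ0)·sqrt(p(1−p))/σ. The sketch below is a generic illustration of that calculation, not the paper's derivations.

```python
# Monte Carlo evaluation of the maximal point-biserial correlation for several distributions.
import numpy as np
from scipy import stats

def max_point_biserial(dist, p, n=500_000, seed=0):
    x = dist.rvs(size=n, random_state=np.random.default_rng(seed))
    cut = np.quantile(x, 1 - p)
    y = x >= cut                              # binary variable flags the top p-fraction
    return (x[y].mean() - x[~y].mean()) * np.sqrt(p * (1 - p)) / x.std()

for name, dist in [("normal", stats.norm()),
                   ("uniform", stats.uniform()),
                   ("exponential", stats.expon()),
                   ("t (df=3)", stats.t(df=3))]:
    vals = [max_point_biserial(dist, p) for p in (0.1, 0.3, 0.5)]
    print(f"{name:12s}  p=0.1: {vals[0]:.3f}   p=0.3: {vals[1]:.3f}   p=0.5: {vals[2]:.3f}")
```

    For the normal case at p = 0.5 this reproduces the familiar maximum of about 0.798, while a uniform underlying variable yields a larger maximum, consistent with the paper's point that the bound depends on the underlying distribution.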

  17. Crater topography on Titan: implications for landscape evolution

    USGS Publications Warehouse

    Neish, Catherine D.; Kirk, R.L.; Lorenz, R.D.; Bray, V.J.; Schenk, P.; Stiles, B.W.; Turtle, E.; Mitchell, Ken; Hayes, A.

    2013-01-01

    We present a comprehensive review of available crater topography measurements for Saturn’s moon Titan. In general, the depths of Titan’s craters are within the range of depths observed for similarly sized fresh craters on Ganymede, but several hundreds of meters shallower than Ganymede’s average depth vs. diameter trend. Depth-to-diameter ratios are between 0.0012 ± 0.0003 (for the largest crater studied, Menrva, D ~ 425 km) and 0.017 ± 0.004 (for the smallest crater studied, Ksa, D ~ 39 km). When we evaluate the Anderson–Darling goodness-of-fit parameter, we find that there is less than a 10% probability that Titan’s craters have a current depth distribution that is consistent with the depth distribution of fresh craters on Ganymede. There is, however, a much higher probability that the relative depths are uniformly distributed between 0 (fresh) and 1 (completely infilled). This distribution is consistent with an infilling process that is relatively constant with time, such as aeolian deposition. Assuming that Ganymede represents a close ‘airless’ analogue to Titan, the difference in depths represents the first quantitative measure of the amount of modification that has shaped Titan’s surface, the only body in the outer Solar System with extensive surface–atmosphere exchange.

  18. Two coupled, driven Ising spin systems working as an engine.

    PubMed

    Basu, Debarshi; Nandi, Joydip; Jayannavar, A M; Marathe, Rahul

    2017-05-01

    Miniaturized heat engines constitute a fascinating field of current research. Many theoretical and experimental studies are being conducted that involve colloidal particles in harmonic traps as well as bacterial baths acting like thermal baths. These systems are micron-sized and are subjected to large thermal fluctuations. Hence, for these systems average thermodynamic quantities, such as work done, heat exchanged, and efficiency, lose meaning unless otherwise supported by their full probability distributions. Earlier studies on microengines are concerned with applying Carnot or Stirling engine protocols to miniaturized systems, where the system undergoes the typical two isothermal and two adiabatic changes. Unlike these models we study a prototype system of two classical Ising spins driven by time-dependent, phase-different, external magnetic fields. These spins are simultaneously in contact with two heat reservoirs at different temperatures for the full duration of the driving protocol. Performance of the model as an engine or a refrigerator depends only on a single parameter, namely the phase between two external drivings. We study this system in terms of fluctuations in efficiency and coefficient of performance (COP). We find full distributions of these quantities numerically and study the tails of these distributions. We also study reliability of the engine. We find that the fluctuations dominate the mean values of efficiency and COP, and their probability distributions are broad with power law tails.

  19. An Optimization-Based Framework for the Transformation of Incomplete Biological Knowledge into a Probabilistic Structure and Its Application to the Utilization of Gene/Protein Signaling Pathways in Discrete Phenotype Classification.

    PubMed

    Esfahani, Mohammad Shahrokh; Dougherty, Edward R

    2015-01-01

    Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilization of prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are neither complete, nor regulatory, with no timing associated with them, they are capable of constraining the set of possible models representing the underlying interaction between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways to prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability on a Multinomial distribution. Since the Dirichlet distribution is the conjugate prior of the Multinomial distribution, we propose optimization paradigms to estimate the parameters of the Dirichlet prior in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: mammalian cell cycle and a p53 pathway model.

  20. Two coupled, driven Ising spin systems working as an engine

    NASA Astrophysics Data System (ADS)

    Basu, Debarshi; Nandi, Joydip; Jayannavar, A. M.; Marathe, Rahul

    2017-05-01

    Miniaturized heat engines constitute a fascinating field of current research. Many theoretical and experimental studies are being conducted that involve colloidal particles in harmonic traps as well as bacterial baths acting like thermal baths. These systems are micron-sized and are subjected to large thermal fluctuations. Hence, for these systems average thermodynamic quantities, such as work done, heat exchanged, and efficiency, lose meaning unless otherwise supported by their full probability distributions. Earlier studies on microengines are concerned with applying Carnot or Stirling engine protocols to miniaturized systems, where system undergoes typical two isothermal and two adiabatic changes. Unlike these models we study a prototype system of two classical Ising spins driven by time-dependent, phase-different, external magnetic fields. These spins are simultaneously in contact with two heat reservoirs at different temperatures for the full duration of the driving protocol. Performance of the model as an engine or a refrigerator depends only on a single parameter, namely the phase between two external drivings. We study this system in terms of fluctuations in efficiency and coefficient of performance (COP). We find full distributions of these quantities numerically and study the tails of these distributions. We also study reliability of the engine. We find the fluctuations dominate mean values of efficiency and COP, and their probability distributions are broad with power law tails.

  1. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
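    The core Bayesian-inference step can be illustrated with a plain random-walk Metropolis sampler (rather than the DRAM sampler and sparse-grid surrogate used in the paper). The "model" below is a toy stand-in that maps two damage parameters (location, size) to strain-sensor readings; all names and numbers are illustrative assumptions.

```python
# Minimal random-walk Metropolis sketch: posterior over damage parameters given noisy
# strain-sensor data, with a Gaussian likelihood and a uniform prior.
import numpy as np

rng = np.random.default_rng(3)
sensor_x = np.linspace(0.0, 1.0, 8)                      # assumed sensor locations

def strain_model(loc, size):
    """Toy surrogate: a localized strain bump of width 'size' centred at 'loc'."""
    return np.exp(-0.5 * ((sensor_x - loc) / size) ** 2)

true_loc, true_size, noise = 0.62, 0.08, 0.02
data = strain_model(true_loc, true_size) + rng.normal(0, noise, sensor_x.size)

def log_posterior(theta):
    loc, size = theta
    if not (0 < loc < 1 and 0.01 < size < 0.5):          # uniform prior support
        return -np.inf
    resid = data - strain_model(loc, size)
    return -0.5 * np.sum((resid / noise) ** 2)           # Gaussian log-likelihood

theta = np.array([0.5, 0.1])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.01])           # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:          # Metropolis acceptance
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])                       # discard burn-in
print("posterior mean (loc, size):", samples.mean(axis=0))
print("posterior std  (loc, size):", samples.std(axis=0))
```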

  2. Using occupancy modeling and logistic regression to assess the distribution of shrimp species in lowland streams, Costa Rica: Does regional groundwater create favorable habitat?

    USGS Publications Warehouse

    Snyder, Marcia; Freeman, Mary C.; Purucker, S. Thomas; Pringle, Catherine M.

    2016-01-01

    Freshwater shrimps are an important biotic component of tropical ecosystems. However, they can have a low probability of detection when abundances are low. We sampled 3 of the most common freshwater shrimp species, Macrobrachium olfersii, Macrobrachium carcinus, and Macrobrachium heterochirus, and used occupancy modeling and logistic regression models to improve our limited knowledge of distribution of these cryptic species by investigating both local- and landscape-scale effects at La Selva Biological Station in Costa Rica. Local-scale factors included substrate type and stream size, and landscape-scale factors included presence or absence of regional groundwater inputs. Capture rates for 2 of the sampled species (M. olfersii and M. carcinus) were sufficient to compare the fit of occupancy models. Occupancy models did not converge for M. heterochirus, but M. heterochirus had high enough occupancy rates that logistic regression could be used to model the relationship between occupancy rates and predictors. The best-supported models for M. olfersii and M. carcinus included conductivity, discharge, and substrate parameters. Stream size was positively correlated with occupancy rates of all 3 species. High stream conductivity, which reflects the quantity of regional groundwater input into the stream, was positively correlated with M. olfersii occupancy rates. Boulder substrates increased occupancy rate of M. carcinus and decreased the detection probability of M. olfersii. Our models suggest that shrimp distribution is driven by factors that function at local (substrate and discharge) and landscape (conductivity) scales.

  3. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
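    For concreteness, the sketch below shows a successive-approximations iteration for a two-component normal mixture on an unlabeled sample: the familiar EM fixed-point map whose limit is a maximum-likelihood estimate. This is related to, but not identical with, the generalized steepest-ascent procedure for partially identified samples studied in the paper.

```python
# EM-style fixed-point iteration for the MLE of a two-component normal mixture.
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.5, 0.7, 200)])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# initial guesses
w, mu1, s1, mu2, s2 = 0.5, -1.0, 1.0, 1.0, 1.0
for it in range(200):
    # E-step: posterior probability that each point belongs to component 1
    p1 = w * normal_pdf(x, mu1, s1)
    p2 = (1 - w) * normal_pdf(x, mu2, s2)
    r = p1 / (p1 + p2)
    # M-step: parameter update (the fixed-point map iterated to convergence)
    w = r.mean()
    mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
    s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
    s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))

print(f"weight={w:.3f}  mu1={mu1:.3f} s1={s1:.3f}  mu2={mu2:.3f} s2={s2:.3f}")
```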

  4. Spatial and seasonal dynamic of abundance and distribution of guanaco and livestock: insights from using density surface and null models.

    PubMed

    Schroeder, Natalia M; Matteucci, Silvia D; Moreno, Pablo G; Gregorio, Pablo; Ovejero, Ramiro; Taraborelli, Paula; Carmanchahi, Pablo D

    2014-01-01

    Monitoring species abundance and distribution is a prerequisite when assessing species status and population viability, a difficult task to achieve for large herbivores at ecologically meaningful scales. Co-occurrence patterns can be used to infer mechanisms of community organization (such as biotic interactions), although it has been traditionally applied to binary presence/absence data. Here, we combine density surface and null models of abundance data as a novel approach to analyze the spatial and seasonal dynamics of abundance and distribution of guanacos (Lama guanicoe) and domestic herbivores in northern Patagonia, in order to visually and analytically compare the dispersion and co-occurrence pattern of ungulates. We found a marked seasonal pattern in abundance and spatial distribution of L. guanicoe. The guanaco population reached its maximum annual size and spatial dispersion in spring-summer, decreasing up to 6.5 times in size and occupying few sites of the study area in fall-winter. These results are evidence of the seasonal migration process of guanaco populations, an increasingly rare event for terrestrial mammals worldwide. The maximum number of guanacos estimated for spring (25,951) is higher than the total population size (10,000) 20 years ago, probably due to both counting methodology and population growth. Livestock were mostly distributed near human settlements, as expected by the sedentary management practiced by local people. Herbivore distribution was non-random; i.e., guanaco and livestock abundances co-varied negatively in all seasons, more than expected by chance. Segregation degree of guanaco and small-livestock (goats and sheep) was comparatively stronger than that of guanaco and large-livestock, suggesting a competition mechanism between ecologically similar herbivores, although various environmental factors could also contribute to habitat segregation. The new and compelling combination of methods used here is highly useful for researchers who conduct counts of animals to simultaneously estimate population sizes, distributions, assess temporal trends and characterize multi-species spatial interactions.

  5. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t{sub c}) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.

  6. Total coliform and E. coli in public water systems using undisinfected ground water in the United States.

    PubMed

    Messner, Michael J; Berger, Philip; Javier, Julie

    2017-06-01

    Public water systems (PWSs) in the United States generate total coliform (TC) and Escherichia coli (EC) monitoring data, as required by the Total Coliform Rule (TCR). We analyzed data generated in 2011 by approximately 38,000 small (serving fewer than 4101 individuals) undisinfected public water systems (PWSs). We used statistical modeling to characterize a distribution of TC detection probabilities for each of nine groupings of PWSs based on system type (community, non-transient non-community, and transient non-community) and population served (less than 101, 101-1000 and 1001-4100 people). We found that among PWS types sampled in 2011, on average, undisinfected transient PWSs test positive for TC 4.3% of the time as compared with 3% for undisinfected non-transient PWSs and 2.5% for undisinfected community PWSs. Within each type of PWS, the smaller systems have higher median TC detection than the larger systems. All TC-positive samples were assayed for EC. Among TC-positive samples from small undisinfected PWSs, EC is detected in about 5% of samples, regardless of PWS type or size. We evaluated the upper tail of the TC detection probability distributions and found that significant percentages of some system types have high TC detection probabilities. For example, assuming the systems providing data are nationally-representative, then 5.0% of the ∼50,000 small undisinfected transient PWSs in the U.S. have TC detection probabilities of 20% or more. Communities with such high TC detection probabilities may have elevated risk of acute gastrointestinal (AGI) illness - perhaps as great as or greater than the risk attributable to drinking water (6-22%) calculated for 14 Wisconsin community PWSs with much lower TC detection probabilities (about 2.3%; Borchardt et al., 2012). Published by Elsevier GmbH.

  7. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.

  8. Analysis of scattering statistics and governing distribution functions in optical coherence tomography.

    PubMed

    Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-07-01

    The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
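    The K-distribution used above to describe OCT intensity statistics arises as an exponential (speckle) intensity whose local mean is itself gamma-distributed, and its density involves the modified Bessel function of the second kind. The sketch below implements that density and checks it against sampling from the compound construction; the parameter values are illustrative assumptions.

```python
# K-distribution PDF for intensity x with shape alpha and given mean, checked against
# samples from the gamma-modulated exponential (compound) construction.
import numpy as np
from scipy.special import kv, gamma as gamma_fn

def k_distribution_pdf(x, alpha, mean=1.0):
    x = np.asarray(x, dtype=float)
    arg = 2.0 * np.sqrt(alpha * x / mean)
    return (2.0 / gamma_fn(alpha)) * (alpha / mean) ** ((alpha + 1) / 2) \
           * x ** ((alpha - 1) / 2) * kv(alpha - 1, arg)

rng = np.random.default_rng(5)
alpha, mean = 2.5, 1.0
s = rng.gamma(shape=alpha, scale=mean / alpha, size=500_000)   # fluctuating local mean
x = rng.exponential(scale=s)                                   # speckle intensity

hist, edges = np.histogram(x, bins=60, range=(0.01, 6.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
max_err = np.max(np.abs(hist - k_distribution_pdf(centers, alpha, mean)))
print("max |empirical - analytic| over histogram bins:", round(max_err, 3))
```

    A smaller shape parameter corresponds to fewer effective scatterers in the probed volume and a heavier-tailed, more non-Gaussian intensity distribution, which is the regime the paper exploits for tissue characterization.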

  9. FDR doesn't Tell the Whole Story: Joint Influence of Effect Size and Covariance Structure on the Distribution of the False Discovery Proportions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    As part of a 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report results of simulations that estimated the false discovery rate (FDR) for equally correlated test statistics using a well-known multiple-test procedure. In our study we estimate the distribution of the false discovery proportion (FDP) for the same procedure under a variety of correlation structures among multiple dependent variables in a MANOVA context. Specifically, we study the mean (the FDR), skewness, kurtosis, and percentiles of the FDP distribution in the case of multiple comparisons that give rise to correlated non-central t-statistics when results at several time periods are being compared to baseline. Even if the FDR achieves its nominal value, other aspects of the distribution of the FDP depend on the interaction between signed effect sizes and correlations among variables, proportion of true nulls, and number of dependent variables. We show examples where the mean FDP (the FDR) is 10% as designed, yet there is a surprising probability of having 30% or more false discoveries. Thus, in a real experiment, the proportion of false discoveries could be quite different from the stipulated FDR.

  10. Rare events in stochastic populations under bursty reproduction

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Assaf, Michael

    2016-11-01

    Recently, a first step was made by the authors towards a systematic investigation of the effect of reaction-step-size noise—uncertainty in the step size of the reaction—on the dynamics of stochastic populations. This was done by investigating the effect of bursty influx on the switching dynamics of stochastic populations. Here we extend this formalism to account for bursty reproduction processes, and improve the accuracy of the formalism to include subleading-order corrections. Bursty reproduction appears in various contexts, where notable examples include bursty viral production from infected cells, and reproduction of mammals involving varying number of offspring. The main question we quantitatively address is how bursty reproduction affects the overall fate of the population. We consider two complementary scenarios: population extinction and population survival; in the former a population gets extinct after maintaining a long-lived metastable state, whereas in the latter a population proliferates despite undergoing a deterministic drift towards extinction. In both models reproduction occurs in bursts, sampled from an arbitrary distribution. Using the WKB approach, we show in the extinction problem that bursty reproduction broadens the quasi-stationary distribution of population sizes in the metastable state, which results in a drastic reduction of the mean time to extinction compared to the non-bursty case. In the survival problem, it is shown that bursty reproduction drastically increases the survival probability of the population. Close to the bifurcation limit our analytical results simplify considerably and are shown to depend solely on the mean and variance of the burst-size distribution. Our formalism is demonstrated on several realistic distributions which all compare well with numerical Monte-Carlo simulations.

  11. Changes in Arctic Sea Ice Floe Size Distribution in the Marginal Ice Zone in a Thickness and Floe Size Distribution Model

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Stern, H. L., III; Hwang, P. B.; Schweiger, A. J. B.; Stark, M.; Steele, M.

    2015-12-01

    To better describe the state of sea ice in the marginal ice zone (MIZ) with floes of varying thicknesses and sizes, both an ice thickness distribution (ITD) and a floe size distribution (FSD) are needed. We have developed a FSD theory [Zhang et al., 2015] that is coupled to the ITD theory of Thorndike et al. [1975] in order to explicitly simulate the evolution of FSD and ITD jointly. The FSD theory includes a FSD function and a FSD conservation equation in parallel with the ITD equation. The FSD equation takes into account changes in FSD due to ice advection, thermodynamic growth, and lateral melting. It also includes changes in FSD because of mechanical redistribution of floe size due to ice opening, ridging and, particularly, ice fragmentation induced by stochastic ocean surface waves. The floe size redistribution due to ice fragmentation is based on the assumption that wave-induced breakup is a random process such that when an ice floe is broken, floes of any smaller sizes have an equal opportunity to form, without being either favored or excluded. It is also based on the assumption that floes of larger sizes are easier to break because they are subject to larger flexure-induced stresses and strains than smaller floes that are easier to ride with waves with little bending; larger floes also have higher areal coverages and therefore higher probabilities to break. These assumptions with corresponding formulations ensure that the simulated FSD follows a power law as observed by satellites and airborne surveys. The FSD theory has been tested in the Pan-arctic Ice/Ocean Modeling and Assimilation System (PIOMAS). The existing PIOMAS has 12 categories each for ice thickness, ice enthalpy, and snow depth. With the implementation of the FSD theory, PIOMAS is able to represent 12 categories of floe sizes ranging from 0.1 m to ~3000 m. It is found that the simulated 12-category FSD agrees reasonably well with FSD derived from SAR and MODIS images. In this study, we will examine PIOMAS-estimated variability and changes in Arctic FSD over the period 1979-present. Thorndike, A. S., D. A. Rothrock, G. A. Maykut, and R. Colony, The thickness distribution of sea ice. J. Geophys. Res., 80, 1975. Zhang, J., A. Schweiger, M. Steele, and H. Stern, Sea ice floe size distribution in the marginal ice zone: Theory and numerical experiments, J. Geophys. Res., 120, 2015.

  12. Correcting length-frequency distributions for imperfect detection

    USGS Publications Warehouse

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
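    The correction concept can be sketched very simply: given a length-dependent capture probability estimated from mark-recapture (here an assumed logistic curve, not the paper's Huggins-model estimates), the corrected frequency of each length class is the raw count divided by its detection probability, a Horvitz-Thompson-style expansion.

```python
# Detection-corrected length-frequency sketch with an assumed logistic capture probability.
import numpy as np

length_bins = np.arange(100, 310, 10)                    # mm, lower edges of length classes
raw_counts = np.array([12, 18, 25, 30, 41, 48, 52, 55, 50, 44,
                       38, 30, 24, 18, 14, 10, 8, 6, 4, 3, 2])

def capture_prob(L, b0=-6.0, b1=0.04):
    """Assumed logistic capture probability increasing with total length L (mm)."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * L)))

mid = length_bins + 5.0                                  # class midpoints
p = capture_prob(mid)
corrected = raw_counts / p                               # Horvitz-Thompson-style expansion

for L, n_raw, n_adj, pi in zip(length_bins, raw_counts, corrected, p):
    print(f"{L:3d}-{L + 10:3d} mm  raw = {n_raw:3d}   p = {pi:.2f}   corrected = {n_adj:6.1f}")
```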

  13. Stochastic modelling for lake thermokarst and peatland patterns in permafrost and near permafrost zones

    NASA Astrophysics Data System (ADS)

    Orlov, Timofey; Sadkov, Sergey; Panchenko, Evgeniy; Zverev, Andrey

    2017-04-01

    Peatlands occupy a significant share of the cryolithozone. They are currently experiencing intense impacts from oil and gas field development, as well as from the construction of infrastructure. This underscores the importance of peatland studies, including those forecasting peatland evolution. Earlier we conducted similar probabilistic modelling for areas of thermokarst development. Its principal points were: 1. The appearance of a thermokarst depression within a given area is a random event whose probability is directly proportional to the size of the area (Δs). For small sites the probability of one thermokarst depression appearing is much greater than that of several appearing, i.e. p_1 = γΔs + o(Δs), p_k = o(Δs) for k = 2, 3, ... 2. The growth of a new thermokarst depression is a random variable independent of the growth of other depressions. It occurs through thermoabrasion and, hence, is directly proportional to the amount of heat in the lake and inversely proportional to the lateral surface area of the lake depression. Using this model, we can derive analytically the two main laws of the morphological pattern of lake thermokarst plains. First, the number of thermokarst depressions (centers) in a random plot obeys the Poisson law P(k,s) = ((γs)^k / k!) e^(-γs), where γ is the average number of depressions per unit area and s is the area of a trial site. Second, the diameters of thermokarst lakes follow a lognormal distribution at any time, i.e. the probability density is of the form f_d(x,t) = (1/(√(2π) σ x √t)) exp(-…)
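
    The two laws above lend themselves to a very small Monte Carlo illustration: draw the number of depressions in a trial site from a Poisson distribution with mean γs, then draw lake diameters from a lognormal distribution. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma_density = 0.5      # illustrative mean number of depressions per km^2 (gamma above)
site_area = 10.0         # area s of a trial site, km^2
mu, sigma = 3.0, 0.6     # illustrative lognormal parameters for lake diameters (log metres)

# First law: the number of depressions in the site is Poisson with mean gamma * s
n_depressions = rng.poisson(gamma_density * site_area)

# Second law: lake diameters are lognormally distributed
diameters_m = rng.lognormal(mean=mu, sigma=sigma, size=n_depressions)

print(n_depressions, np.round(diameters_m, 1))
```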

  14. Some aspects of resource uncertainty and their economic consequences in assessment of the 1002 area of the Arctic National Wildlife Refuge

    USGS Publications Warehouse

    Attanasi, E.D.; Schuenemeyer, J.H.

    2002-01-01

    Exploration ventures in frontier areas have high risks. Before committing to them, firms prepare regional resource assessments to evaluate the potential payoffs. With no historical basis for directly estimating the size distribution of undiscovered accumulations, reservoir attribute probability distributions can be assessed subjectively and used to project undiscovered accumulation sizes. Three questions considered here are: (1) what distributions should be used to characterize the subjective assessments of reservoir attributes, (2) how parsimonious can the analyst be when eliciting subjective information from the assessment geologist, and (3) what are the consequences of ignoring dependencies among reservoir attributes? The standard or norm used for comparing outcomes is the computed cost function describing costs of finding, developing, and producing undiscovered oil accumulations. These questions are examined in the context of the US Geological Survey's recently published regional assessment of the 1002 Area of the Arctic National Wildlife Refuge, Alaska. We study effects of using the various common distributions to characterize the geologist's subjective distributions representing reservoir attributes. Specific findings show that triangular distributions result in substantial bias in economic forecasts when used to characterize skewed distributions. Moreover, some forms of the lognormal distribution also result in biased economic inferences. Alternatively, we generally determined four fractiles (100, 50, 5, 0) to be sufficient to capture essential economic characteristics of the underlying attribute distributions. Ignoring actual dependencies among reservoir attributes biases the economic evaluation. © 2002 International Association for Mathematical Geology.
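
    One way to turn a small set of elicited fractiles into a sampling distribution is to interpolate a piecewise-linear CDF between them and sample by inversion; the sketch below does this for four fractiles (100, 50, 5, 0) as mentioned above. The fractile values and the linear interpolation are illustrative assumptions, not the procedure or numbers used in the assessment.

```python
import numpy as np

# Hypothetical elicited fractiles for an accumulation-size attribute:
# fractile f = P(X >= value); 100% is the minimum, 0% the maximum (illustrative values)
fractiles = np.array([1.00, 0.50, 0.05, 0.00])
values = np.array([5.0, 40.0, 300.0, 1200.0])

# Convert to a cumulative distribution F(x) = P(X <= x) and sample by inversion,
# linearly interpolating the CDF between the elicited points
F = 1.0 - fractiles
rng = np.random.default_rng(1)
u = rng.uniform(size=10_000)
samples = np.interp(u, F, values)

print(samples.mean(), np.percentile(samples, [5, 50, 95]))
```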

  15. Applications of finite-size scaling for atomic and non-equilibrium systems

    NASA Astrophysics Data System (ADS)

    Antillon, Edwin A.

    We apply the theory of finite-size scaling (FSS) to an atomic and a non-equilibrium system in order to extract critical parameters. In atomic systems, we look at the energy dependence on the binding charge near the threshold between bound and free states, where we seek the critical nuclear charge for stability. We use different ab initio methods, such as Hartree-Fock, Density Functional Theory, and exact formulations implemented numerically with the finite-element method (FEM). Using the finite-size scaling formalism, where in this case the size of the system is related to the number of elements used in the basis expansion of the wavefunction, we predict critical parameters in the large-basis limit. Results prove to be in good agreement with previous Slater-basis-set calculations and demonstrate that this combined approach provides a promising first-principles route to describing quantum phase transitions for materials and extended systems. In the second part we look at a non-equilibrium one-dimensional model known as the raise and peel model, describing a growing surface which grows locally and has non-local desorption. For specific values of adsorption (u_a) and desorption (u_d) the model shows interesting features. At u_a = u_d, the model is described by a conformal field theory (with conformal charge c = 0), and its stationary probability can be mapped to the ground state of a quantum chain and can also be related to a two-dimensional statistical model. For u_a ≥ u_d, the model shows a scale-invariant phase in the avalanche distribution. In this work we study the surface dynamics by looking at avalanche distributions using the FSS formalism and explore the effect of changing the boundary conditions of the model. The model shows the same universality for the cases with and without the wall for an odd number of tiles removed, but we find a new exponent in the presence of a wall for an even number of avalanches released. We provide a new conjecture for the probability distribution of avalanches with a wall, obtained by using exact diagonalization of small lattices and Monte Carlo simulations.

  16. Rheological Behavior, Granule Size Distribution and Differential Scanning Calorimetry of Cross-Linked Banana (Musa paradisiaca) Starch.

    NASA Astrophysics Data System (ADS)

    Núñez-Santiago, María C.; Maristany-Cáceres, Amira J.; Suárez, Francisco J. García; Bello-Pérez, Arturo

    2008-07-01

    Rheological behavior at 60 °C, granule size distribution and differential scanning calorimetry (DSC) tests were employed to study the effect of diverse reaction conditions (adipic acid concentration, pH and temperature) during cross-linking of banana (Musa paradisiaca) starch. These properties were also determined in native banana starch pastes for the purpose of comparison. Pastes of cross-linked starch at 60 °C did not show hysteresis, probably because the cross-linking of the starch prevented disruption of the granules, whereas native starch showed hysteresis in a thixotropic loop. All pastes exhibited non-Newtonian shear-thinning behavior. In all cases, the size distribution showed a decrease in the median diameter of cross-linked starches. This condition produces a decrease in the swelling capacity of cross-linked starch. The median diameter decreased with an increase in adipic acid concentration; however, increases in pH and temperature produced an increase in this variable. Finally, an increase in gelatinization temperature and enthalpy (ΔH) was observed as an effect of cross-linking. An increase in adipic acid concentration produced an increase in Tonset and a decrease in ΔH; pH and temperature also affected these parameters. Cross-linking of banana starch produced granules that were more resistant during the pasting procedure.

  17. What does reflection from cloud sides tell us about vertical distribution of cloud droplets?

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Kaufman, Yoram; Martins, V.; Zubko, Victor

    2006-01-01

    In order to accurately measure the interaction of clouds with aerosols, we have to resolve the vertical distribution of cloud droplet sizes and determine the temperature of glaciation for clean and polluted clouds. Knowledge of the droplet vertical profile is also essential for understanding precipitation. So far, all existing satellites either measure cloud microphysics only at cloud top (e.g., MODIS) or give a vertical profile of precipitation-sized droplets (e.g., CloudSat). What if one measures cloud microphysical properties in the vertical by retrieving them from the solar and infrared radiation reflected or emitted from cloud sides? This was the idea behind CLAIM-3D (a 3D cloud-aerosol interaction mission) recently proposed by NASA GSFC. This presentation will focus on the interpretation of the radiation reflected from cloud sides. In contrast to the plane-parallel approximation, the conventional approach in all current operational retrievals, 3D radiative transfer will be used for interpreting the observed reflectances. As a proof of concept, we will show a few examples of radiation reflected from cloud fields generated by a simple stochastic cloud model with prescribed microphysics. Instead of fixed values of the retrieved effective radii, the probability density functions of droplet size distributions will serve as possible retrievals.

  18. On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2013-04-01

    The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since interevent-time distributions other than the Weibull are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test in order to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. Keywords: hypothesis testing, modified Weibull, hazard rate, finite size. References: [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] Eliazar, I., Klafter, J., 2006. Growth-collapse and decay-surge evolutions, and geometric Langevin equations, Physica A, 367, 106-128.
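
    A minimal sketch of the Kolmogorov-Smirnov check mentioned above, applied to synthetic interevent times and a fitted two-parameter Weibull using scipy; the data and parameters are illustrative, and the p-value is optimistic because the parameters are fitted to the same sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic interevent times (days); drawn from a Weibull purely for illustration
t = 30.0 * rng.weibull(0.8, size=500)

# Fit a two-parameter Weibull (location fixed at zero), then apply the KS test.
# Note: p-values are optimistic when parameters are fitted to the same data; a
# parametric bootstrap would give a more honest rejection threshold.
c, loc, scale = stats.weibull_min.fit(t, floc=0)
stat, pval = stats.kstest(t, "weibull_min", args=(c, loc, scale))
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3f}")
```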

  19. Effects of ignition location models on the burn patterns of simulated wildfires

    USGS Publications Warehouse

    Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.

    2011-01-01

    Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.

  20. SEDPAK—A comprehensive operational system and data-processing package in APPLESOFT BASIC for a settling tube, sediment analyzer

    NASA Astrophysics Data System (ADS)

    Goldbery, R.; Tehori, O.

    SEDPAK provides a comprehensive software package for operation of a settling tube and sand analyzer (2-0.063 mm) and includes data-processing programs for statistical and graphic output of results. The programs are menu-driven and written in APPLESOFT BASIC, conforming with APPLE 3.3 DOS. Data storage and retrieval from disc is an important feature of SEDPAK. Additional features of SEDPAK include condensation of raw settling data via standard size-calibration curves to yield statistical grain-size parameters, plots of grain-size frequency distributions and cumulative log/probability curves. The program also has a module for processing of grain-size frequency data from sieved samples. An additional feature of SEDPAK is the option for automatic data processing and graphic output of a sequential or nonsequential array of samples on one side of a disc.

  1. Twelve- to 14-Month-Old Infants Can Predict Single-Event Probability with Large Set Sizes

    ERIC Educational Resources Information Center

    Denison, Stephanie; Xu, Fei

    2010-01-01

    Previous research has revealed that infants can reason correctly about single-event probabilities with small but not large set sizes (Bonatti, 2008; Teglas "et al.", 2007). The current study asks whether infants can make predictions regarding single-event probability with large set sizes using a novel procedure. Infants completed two trials: A…

  2. Eternal inflation, bubble collisions, and the disintegration of the persistence of memory

    NASA Astrophysics Data System (ADS)

    Freivogel, Ben; Kleban, Matthew; Nicolis, Alberto; Sigurdson, Kris

    2009-08-01

    We compute the probability distribution for bubble collisions in an inflating false vacuum which decays by bubble nucleation. Our analysis generalizes previous work of Guth, Garriga, and Vilenkin to the case of general cosmological evolution inside the bubble, and takes into account the dynamics of the domain walls that form between the colliding bubbles. We find that incorporating these effects changes the results dramatically: the total expected number of bubble collisions in the past lightcone of a typical observer is N ~ γ V_f/V_i, where γ is the fastest decay rate of the false vacuum, V_f is its vacuum energy, and V_i is the vacuum energy during inflation inside the bubble. This number can be large in realistic models without tuning. In addition, we calculate the angular position and size distribution of the collisions on the cosmic microwave background sky, and demonstrate that the number of bubbles of observable angular size is N_LS ~ (Ω_k)^(1/2) N, where Ω_k is the curvature contribution to the total density at the time of observation. The distribution is almost exactly isotropic.

  3. Avalanches and power-law behaviour in lung inflation

    NASA Astrophysics Data System (ADS)

    Suki, Béla; Barabási, Albert-László; Hantos, Zoltán; Peták, Ferenc; Stanley, H. Eugene

    1994-04-01

    When lungs are emptied during exhalation, peripheral airways close up [1]. For people with lung disease, they may not reopen for a significant portion of inhalation, impairing gas exchange [2,3]. Knowledge of the mechanisms that govern reinflation of collapsed regions of lungs is therefore central to the development of ventilation strategies for combating respiratory problems. Here we report measurements of the terminal airway resistance, Rt, during the opening of isolated dog lungs. When inflated by a constant flow, Rt decreases in discrete jumps. We find that the probability distributions of the sizes of the jumps and of the time intervals between them exhibit power-law behaviour over two decades. We develop a model of the inflation process in which 'avalanches' of airway openings are seen, with power-law distributions of both the size of avalanches and the time intervals between them, which agree quantitatively with those seen experimentally, and are reminiscent of the power-law behaviour observed for self-organized critical systems [4]. Thus power-law distributions, arising from avalanches associated with threshold phenomena propagating down a branching tree structure, appear to govern the recruitment of terminal airspaces.

  4. Methodology for finding and evaluating safe landing sites on small bodies

    NASA Astrophysics Data System (ADS)

    Rodgers, Douglas J.; Ernst, Carolyn M.; Barnouin, Olivier S.; Murchie, Scott L.; Chabot, Nancy L.

    2016-12-01

    Here we develop and demonstrate a three-step strategy for finding a safe landing ellipse for a legged spacecraft on a small body such as an asteroid or planetary satellite. The first step, acquisition of a high-resolution terrain model of a candidate landing region, is simulated using existing statistics on block abundances measured at Phobos, Eros, and Itokawa. The synthetic terrain model is generated by randomly placing hemispherically shaped blocks with the empirically determined size-frequency distribution. The resulting terrain is much rockier than typical lunar or martian landing sites. The second step, locating a landing ellipse with minimal hazards, is demonstrated for an assumed approach to landing that uses Autonomous Landing and Hazard Avoidance Technology. The final step, determination of the probability distribution for orientation of the landed spacecraft, is demonstrated for cases of differing regional slope. The strategy described here is both a prototype for finding a landing site during a flight mission and a set of tools for evaluating the design of small-body landers. We show that for bodies with Eros-like block distributions, there may be >99% probability of landing stably at a low tilt without blocks impinging on spacecraft structures so as to pose a survival hazard.
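
    A minimal sketch of the first step under simple assumptions: block diameters are drawn from a truncated power-law size-frequency distribution by inverse-CDF sampling and scattered uniformly over a candidate region. The exponent, number density, and diameter range are hypothetical, not the Phobos/Eros/Itokawa statistics used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative cumulative size-frequency law: N(>D) ~ D**(-q) per m^2
q = 2.5                          # hypothetical power-law slope
d_min, d_max = 0.2, 5.0          # block diameters considered, m
n_per_m2 = 0.05                  # hypothetical number density of blocks larger than d_min
area = 100.0 * 100.0             # candidate region, m^2

n_blocks = rng.poisson(n_per_m2 * area)

# Inverse-CDF sampling of a truncated power law for block diameters
u = rng.uniform(size=n_blocks)
diam = (d_min ** (-q) - u * (d_min ** (-q) - d_max ** (-q))) ** (-1.0 / q)

# Random block positions within the square region
xy = rng.uniform(0.0, 100.0, size=(n_blocks, 2))
print(n_blocks, "blocks; largest diameter =", round(float(diam.max()), 2), "m")
```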

  5. Statistical thermodynamics of amphiphile chains in micelles

    PubMed Central

    Ben-Shaul, A.; Szleifer, I.; Gelbart, W. M.

    1984-01-01

    The probability distribution of amphiphile chain conformations in micelles of different geometries is derived through maximization of their packing entropy. A lattice model, first suggested by Dill and Flory, is used to represent the possible chain conformations in the micellar core. The polar heads of the chains are assumed to be anchored to the micellar surface, with the other chain segments occupying all lattice sites in the interior of the micelle. This “volume-filling” requirement, the connectivity of the chains, and the geometry of the micelle define constraints on the possible probability distributions of chain conformations. The actual distribution is derived by maximizing the chain's entropy subject to these constraints; “reversals” of the chains back towards the micellar surface are explicitly included. Results are presented for amphiphiles organized in planar bilayers and in cylindrical and spherical micelles of different sizes. It is found that, for all three geometries, the bond order parameters decrease as a function of the bond distance from the polar head, in accordance with recent experimental data. The entropy differences associated with geometrical changes are shown to be significant, suggesting thereby the need to include curvature (environmental)-dependent “tail” contributions in statistical thermodynamic treatments of micellization. PMID:16593492

  6. Probability Formulas for Describing Fragment Size Distributions

    DTIC Science & Technology

    1981-06-01


  7. Physical interrelation between Fokker-Planck and random walk models with application to Coulomb interactions.

    NASA Technical Reports Server (NTRS)

    Englert, G. W.

    1971-01-01

    A model of the random walk is formulated to allow a simple computing procedure to replace the difficult problem of solution of the Fokker-Planck equation. The step sizes and probabilities of taking steps in the various directions are expressed in terms of Fokker-Planck coefficients. Application is made to many particle systems with Coulomb interactions. The relaxation of a highly peaked velocity distribution of particles to equilibrium conditions is illustrated.
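
    The correspondence described above, random-walk steps whose mean and variance are set by the Fokker-Planck coefficients, can be sketched with the standard Euler-Maruyama update; this is a generic illustration of the idea (with illustrative drift and diffusion coefficients), not the paper's discrete step-probability scheme for Coulomb interactions.

```python
import numpy as np

rng = np.random.default_rng(3)

def drift(v):        # illustrative Fokker-Planck drift coefficient A(v)
    return -v

def diffusion(v):    # illustrative Fokker-Planck diffusion coefficient D(v) > 0
    return np.ones_like(v)

dt, n_steps, n_particles = 0.01, 2000, 5000
v = rng.normal(5.0, 0.5, size=n_particles)   # highly peaked initial velocity distribution

for _ in range(n_steps):
    # Each step has mean A(v)*dt and variance 2*D(v)*dt, matching the Fokker-Planck equation
    v = v + drift(v) * dt + np.sqrt(2.0 * diffusion(v) * dt) * rng.normal(size=n_particles)

print(f"mean = {v.mean():.3f}, std = {v.std():.3f}")   # relaxes toward equilibrium (~N(0, 1))
```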

  8. Investigation of Dielectric Breakdown Characteristics for Double-break Vacuum Interrupter and Dielectric Breakdown Probability Distribution in Vacuum Interrupter

    NASA Astrophysics Data System (ADS)

    Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi

    To investigate the reliability of vacuum insulation equipment, a study was carried out to clarify breakdown probability distributions in the vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage that permits a zero breakdown probability. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14 and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5 and is constant irrespective of gap length. This indicates that the scatter of breakdown voltage was increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias in the vacuum interrupters' voltage sharing is taken into account.
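
    A minimal sketch of a three-parameter Weibull breakdown-probability model of the kind described above, plus a naive double-break comparison that assumes an even voltage split across two independent, identical gaps; the shape, scale, and location values are illustrative, not the measured ones.

```python
import numpy as np

def breakdown_probability(v_kv, shape=12.0, scale=50.0, location=80.0):
    """Three-parameter Weibull CDF: probability of breakdown at applied voltage v_kv (kV).
    The location parameter is the voltage below which the breakdown probability is zero."""
    z = np.clip((np.asarray(v_kv, dtype=float) - location) / scale, 0.0, None)
    return 1.0 - np.exp(-z ** shape)

def double_break_probability(v_kv, **kw):
    # Naive assumption: the voltage splits evenly across two identical, independent gaps
    p_half = breakdown_probability(np.asarray(v_kv, dtype=float) / 2.0, **kw)
    return 1.0 - (1.0 - p_half) ** 2

for volts in (100.0, 160.0, 220.0):
    print(volts, breakdown_probability(volts), double_break_probability(volts))
```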

  9. Aerosol properties computed from aircraft-based observations during the ACE- Asia campaign. 2; A case study of lidar ratio closure and aerosol radiative effects

    NASA Technical Reports Server (NTRS)

    Kuzmanoski, Maja; Box, M. A.; Schmid, B.; Box, G. P.; Wang, J.; Russell, P. B.; Bates, D.; Jonsson, H. H.; Welton, Ellsworth J.; Flagan, R. C.

    2005-01-01

    For a vertical profile with three distinct layers (marine boundary, pollution and dust), observed during the ACE-Asia campaign, we carried out a comparison between the modeled lidar ratio vertical profile and that obtained from collocated airborne NASA AATS-14 sunphotometer and shipborne Micro-Pulse Lidar (MPL) measurements. Vertically resolved lidar ratio was calculated from two size distribution vertical profiles - one obtained by inversion of sunphotometer-derived extinction spectra, and one measured in-situ - combined with the same refractive index model based on aerosol chemical composition. The aerosol model implies single scattering albedos of 0.78 - 0.81 and 0.93 - 0.96 at 0.523 microns (the wavelength of the lidar measurements), in the pollution and dust layers, respectively. The lidar ratios calculated from the two size distribution profiles have close values in the dust layer; they are however, significantly lower than the lidar ratios derived from combined lidar and sunphotometer measurements, most probably due to the use of a simple nonspherical model with a single particle shape in our calculations. In the pollution layer, the two size distribution profiles yield generally different lidar ratios. The retrieved size distributions yield a lidar ratio which is in better agreement with that derived from lidar/sunphotometer measurements in this layer, with still large differences at certain altitudes (the largest relative difference was 46%). We explain these differences by non-uniqueness of the result of the size distribution retrieval and lack of information on vertical variability of particle refractive index. Radiative transfer calculations for this profile showed significant atmospheric radiative forcing, which occurred mainly in the pollution layer. We demonstrate that if the extinction profile is known then information on the vertical structure of absorption and asymmetry parameter is not significant for estimating forcing at TOA and the surface, while it is of importance for estimating vertical profiles of radiative forcing and heating rates.

  10. Impact and Cratering History of the Pluto System

    NASA Astrophysics Data System (ADS)

    Greenstreet, Sarah; Gladman, Brett; McKinnon, William B.

    2014-11-01

    The observational opportunity of the New Horizons spacecraft fly-through of the Pluto system in July 2015 requires a current understanding of the Kuiper belt dynamical sub-populations to accurately interpret the cratering history of the surfaces of Pluto and its satellites. We use an Opik-style collision probability code to compute impact rates and impact velocity distributions onto Pluto and its binary companion Charon from the Canada-France Ecliptic Plane Survey (CFEPS) model of classical and resonant Kuiper belt populations (Petit et al., 2011; Gladman et al., 2012) and the scattering model of Kaib et al. (2011) calibrated to Shankman et al. (2013). Due to the uncertainty in how the well-characterized size distribution for Kuiper belt objects (with diameter d>100 km) connects to smaller objects, we compute cratering rates using three simple impactor size distribution extrapolations (a single power-law, a power-law with a knee, and a power-law with a divot) as well as the "curvy" impactor size distributions from Minton et al. (2012) and Schlichting et al. (2013). Current size distribution uncertainties cause absolute ages computed for Pluto surfaces to be entirely dependent on the extrapolation to small sizes and thus uncertain to a factor of approximately 6. We illustrate the relative importance of each Kuiper belt sub-population to Pluto's cratering rate, both now and integrated into the past, and provide crater retention ages for several cases. We find there is only a small chance a crater with diameter D>200 km has been created on Pluto in the past 4 Gyr. The 2015 New Horizons fly-through coupled with telescope surveys that cover objects with diameters d=10-100 km should eventually drop current crater retention age uncertainties on Pluto to <30%. In addition, we compute the "disruption timescale" (to a factor of three accuracy) for Pluto's smaller satellites: Styx, Nix, Kerberos, and Hydra.

  11. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

    Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probabilities of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β), where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
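
    A minimal sketch of the beta-binomial calculation described above using scipy.stats.betabinom; the Beta parameters are illustrative placeholders for the survey's maximum-likelihood estimates.

```python
from scipy.stats import betabinom

# Illustrative Beta(alpha, beta) parameters for the day-level probability of being active;
# the survey's maximum-likelihood values would replace these.
alpha, beta = 2.0, 3.0
n_days = 7

# Marginal probability of being active on all 7 of 7 assessed days
print(f"P(active 7/7 days) = {betabinom.pmf(n_days, n_days, alpha, beta):.4f}")

# Conditional version after observing a child's active/inactive days, as in the abstract
active, inactive = 5, 2
print(f"P(7/7 | 5 active, 2 inactive) = {betabinom.pmf(7, 7, alpha + active, beta + inactive):.4f}")
```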

  12. Kolmogorov-Smirnov test for spatially correlated data

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of the bootstrap is done by drawing from the empirical sample with replacement, presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the values of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested on two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other one was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size. © Springer-Verlag 2008.
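
    For reference, the classical baseline that the paper generalizes can be sketched as follows: compute the KS statistic D of a sample against a hypothesized CDF and compare it with a resampled reference distribution of D. The spatially correlated resampling via simulated annealing, which is the paper's actual innovation, is not reproduced here; the data and hypothesized distribution are illustrative.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(11)

def ks_statistic(sample, cdf):
    """Two-sided Kolmogorov-Smirnov statistic D against a fully specified CDF."""
    x = np.sort(sample)
    n = x.size
    f = cdf(x)
    return max(np.max(np.arange(1, n + 1) / n - f), np.max(f - np.arange(0, n) / n))

hyp_cdf = lambda x: lognorm.cdf(x, 0.5)          # hypothesized lognormal (sigma = 0.5)
sample = rng.lognormal(0.0, 0.5, size=200)       # stand-in for the field data
D_obs = ks_statistic(sample, hyp_cdf)

# Reference distribution of D built by resampling. NOTE: the paper draws these resamples
# from spatially correlated realizations; independent resampling is only the classical case.
D_ref = np.array([ks_statistic(rng.lognormal(0.0, 0.5, size=200), hyp_cdf)
                  for _ in range(500)])
print(f"D = {D_obs:.3f}, p-value = {np.mean(D_ref >= D_obs):.3f}")
```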

  13. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

    The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of the intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
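
    Generalization (1) above can be checked with a small Monte Carlo sketch for a line-like target and a parallel-line search: the intersection probability comes out proportional to the ratio of the target's greatest dimension to the line spacing (with a factor of 2/π from averaging over unknown orientations). The spacing and target length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

spacing = 100.0     # minimum spacing of the parallel search lines
length = 30.0       # greatest dimension of a hidden, line-like target (same units)
n = 200_000

x0 = rng.uniform(0.0, spacing, size=n)     # left end of the target's extent across the lines
theta = rng.uniform(0.0, np.pi, size=n)    # random (unknown) target orientation
extent = length * np.abs(np.cos(theta))    # projection of the target across the lines

p_hit = np.mean(x0 + extent >= spacing)    # target crosses the next search line
print(p_hit, length / spacing)             # ~ (2/pi) * length/spacing, proportional to the ratio
```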

  14. An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood

    NASA Astrophysics Data System (ADS)

    Dinh, Khanh N.; Sidje, Roger B.

    2017-12-01

    Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest to directly obtain the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address the challenge and there have been variants that further reduce the size of the projection or that accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply this Krylov-FSP-SSA to a mutual inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set so that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insights into this approach of parameter estimation that is often applied to models in systems biology where there is a need to calibrate free parameters. Work supported by NSF grant DMS-1320849.

  15. A Weighted Configuration Model and Inhomogeneous Epidemics

    NASA Astrophysics Data System (ADS)

    Britton, Tom; Deijfen, Maria; Liljeros, Fredrik

    2011-12-01

    A random graph model with prescribed degree distribution and degree dependent edge weights is introduced. Each vertex is independently equipped with a random number of half-edges and each half-edge is assigned an integer valued weight according to a distribution that is allowed to depend on the degree of its vertex. Half-edges with the same weight are then paired randomly to create edges. An expression for the threshold for the appearance of a giant component in the resulting graph is derived using results on multi-type branching processes. The same technique also gives an expression for the basic reproduction number for an epidemic on the graph where the probability that a certain edge is used for transmission is a function of the edge weight (reflecting how closely `connected' the corresponding vertices are). It is demonstrated that, if vertices with large degree tend to have large (small) weights on their edges and if the transmission probability increases with the edge weight, then it is easier (harder) for the epidemic to take off compared to a randomized epidemic with the same degree and weight distribution. A recipe for calculating the probability of a large outbreak in the epidemic and the size of such an outbreak is also given. Finally, the model is fitted to three empirical weighted networks of importance for the spread of contagious diseases and it is shown that R 0 can be substantially over- or underestimated if the correlation between degree and weight is not taken into account.
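
    A minimal sketch of the graph construction described above: each vertex receives a random number of half-edges, each half-edge gets an integer weight drawn from a degree-dependent distribution, and half-edges with the same weight are paired at random. The degree and weight distributions are illustrative assumptions, not those fitted to the empirical networks.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(8)

n = 1000
# Illustrative degree distribution (not from the paper)
degrees = rng.choice([1, 2, 3, 4, 5], size=n, p=[0.35, 0.30, 0.20, 0.10, 0.05])

# Assign each half-edge an integer weight whose distribution depends on its vertex degree;
# here, purely for illustration, higher-degree vertices tend to carry larger weights.
by_weight = defaultdict(list)
for v, d in enumerate(degrees):
    for _ in range(d):
        w = 1 + rng.poisson(0.5 * d)          # hypothetical degree-dependent weight law
        by_weight[w].append(v)

# Pair half-edges that share the same weight uniformly at random to create edges
# (self-loops and multi-edges are allowed, as in configuration models)
edges = []
for w, stubs in by_weight.items():
    stubs = rng.permutation(stubs)
    if len(stubs) % 2:                        # discard a single leftover stub, if any
        stubs = stubs[:-1]
    edges += [(int(stubs[i]), int(stubs[i + 1]), w) for i in range(0, len(stubs), 2)]

print(f"{len(edges)} edges; example (u, v, weight): {edges[0]}")
```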

  16. Predictions from star formation in the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Leichenauer, Stefan

    2010-03-15

    We compute trivariate probability distributions in the landscape, scanning simultaneously over the cosmological constant, the primordial density contrast, and spatial curvature. We consider two different measures for regulating the divergences of eternal inflation, and three different models for observers. In one model, observers are assumed to arise in proportion to the entropy produced by stars; in the others, they arise at a fixed time (5 or 10 × 10^9 years) after star formation. The star formation rate, which underlies all our observer models, depends sensitively on the three scanning parameters. We employ a recently developed model of star formation in the multiverse, a considerable refinement over previous treatments of the astrophysical and cosmological properties of different pocket universes. For each combination of observer model and measure, we display all single and bivariate probability distributions, both with the remaining parameter(s) held fixed and marginalized. Our results depend only weakly on the observer model but more strongly on the measure. Using the causal diamond measure, the observed parameter values (or bounds) lie within the central 2σ of nearly all probability distributions we compute, and always within 3σ. This success is encouraging and rather nontrivial, considering the large size and dimension of the parameter space. The causal patch measure gives similar results as long as curvature is negligible. If curvature dominates, the causal patch leads to a novel runaway: it prefers a negative value of the cosmological constant, with the smallest magnitude available in the landscape.

  17. Experimental Study of the Effect of the Initial Spectrum Width on the Statistics of Random Wave Groups

    NASA Astrophysics Data System (ADS)

    Shemer, L.; Sergeeva, A.

    2009-12-01

    The statistics of random water wave field determines the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral wave field characteristics. Laboratory investigation of the spatial variation of the random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, which is the biggest facility of its kind in Europe. Numerous realizations of a wave field with the prescribed frequency power spectrum, yet randomly-distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wave length but different width were considered. For each spectral shape, the total duration of sampling in all realizations was long enough to yield sufficient sample size for reliable statistics. Through all experiments, an effort had been made to retain the characteristic wave height value and thus the degree of nonlinearity of the wave field. Spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape. The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is the highest at the distance of about 100 m. Acknowledgement This study is carried out in the framework of the EC supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for rectangular initial spectral shape, the carrier wave period T0=1.5 s.

  18. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    NASA Astrophysics Data System (ADS)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observed probabilities differ depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistical data that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation in conjunction with a quantitative evaluation that uses fracture simulations and verification using natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5°^-2, then maximum accuracy occurs at a grid size of 1° × 1°.

  19. Average BER of subcarrier intensity modulated free space optical systems over the exponentiated Weibull fading channels.

    PubMed

    Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang

    2014-08-25

    The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over turbulence atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of EW distribution are compared with Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence atmosphere, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.

  20. Black swans, power laws, and dragon-kings: Earthquakes, volcanic eruptions, landslides, wildfires, floods, and SOC models

    NASA Astrophysics Data System (ADS)

    Sachs, M. K.; Yoder, M. R.; Turcotte, D. L.; Rundle, J. B.; Malamud, B. D.

    2012-05-01

    Extreme events that change global society have been characterized as black swans. The frequency-size distributions of many natural phenomena are often well approximated by power-law (fractal) distributions. An important question is whether the probability of extreme events can be estimated by extrapolating the power-law distributions. Events that exceed these extrapolations have been characterized as dragon-kings. In this paper we consider extreme events for earthquakes, volcanic eruptions, wildfires, landslides and floods. We also consider the extreme event behavior of three models that exhibit self-organized criticality (SOC): the slider-block, forest-fire, and sand-pile models. Since extrapolations using power laws are widely used in probabilistic hazard assessment, the occurrence of dragon-king events has important practical implications.
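
    The extrapolation question above can be made concrete with a short sketch: estimate the power-law exponent of a frequency-size distribution by maximum likelihood and extrapolate the exceedance probability beyond the observed range; a dragon-king would be an event occurring far more often than such an extrapolation implies. The synthetic data and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic event sizes with a Pareto (power-law) tail, purely for illustration
x_min = 1.0
sizes = x_min * (1.0 - rng.uniform(size=2000)) ** (-1.0 / 1.5)

# Continuous maximum-likelihood (Hill-type) estimate of the density exponent alpha,
# where p(x) ~ x**(-alpha) for x >= x_min
alpha_hat = 1.0 + sizes.size / np.sum(np.log(sizes / x_min))

# Power-law extrapolation of the exceedance probability far beyond the observed range
x_extreme = 100.0 * sizes.max()
p_exceed = (x_extreme / x_min) ** (1.0 - alpha_hat)
print(f"alpha_hat = {alpha_hat:.2f}, extrapolated P(X > {x_extreme:.0f}) = {p_exceed:.2e}")
```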

  1. Almost conserved operators in nearly many-body localized systems

    NASA Astrophysics Data System (ADS)

    Pancotti, Nicola; Knap, Michael; Huse, David A.; Cirac, J. Ignacio; Bañuls, Mari Carmen

    2018-03-01

    We construct almost conserved local operators, that possess a minimal commutator with the Hamiltonian of the system, near the many-body localization transition of a one-dimensional disordered spin chain. We collect statistics of these slow operators for different support sizes and disorder strengths, both using exact diagonalization and tensor networks. Our results show that the scaling of the average of the smallest commutators with the support size is sensitive to Griffiths effects in the thermal phase and the onset of many-body localization. Furthermore, we demonstrate that the probability distributions of the commutators can be analyzed using extreme value theory and that their tails reveal the difference between diffusive and subdiffusive dynamics in the thermal phase.

  2. Methodology for assessment of undiscovered oil and gas resources for the 2008 Circum-Arctic Resource Appraisal

    USGS Publications Warehouse

    Charpentier, Ronald R.; Moore, Thomas E.; Gautier, D.L.

    2017-11-15

    The methodological procedures used in the geologic assessments of the 2008 Circum-Arctic Resource Appraisal (CARA) were based largely on the methodology developed for the 2000 U.S. Geological Survey World Petroleum Assessment. The main variables were probability distributions for numbers and sizes of undiscovered accumulations with an associated risk of occurrence. The CARA methodology expanded on the previous methodology in providing additional tools and procedures more applicable to the many Arctic basins that have little or no exploration history. Most importantly, geologic analogs from a database constructed for this study were used in many of the assessments to constrain numbers and sizes of undiscovered oil and gas accumulations.

  3. Opalescent and cloudy fruit juices: formation and particle stability.

    PubMed

    Beveridge, Tom

    2002-07-01

    Cloudy fruit juices, particularly from tropical fruit, are becoming a fast-growing part of the fruit juice sector. The classification of cloud as coarse and fine clouds by centrifugation and composition of cloud from apple, pineapple, orange, guava, and lemon juice are described. Fine particulate is shown to be the true stable cloud and to contain considerable protein, carbohydrate, and lipid components. Often, tannin is present as well. The fine cloud probably arises from cell membranes and appears not to be simply cell debris. Factors relating to the stability of fruit juice cloud, including particle sizes, size distribution, and density, are described and discussed. Factors promoting stable cloud in juice are presented.

  4. Bayesian statistical inference enhances the interpretation of contemporary randomized controlled trials.

    PubMed

    Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas

    2009-01-01

    Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
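
    A minimal sketch of an exact conjugate Bayesian analysis of the kind described above, for a two-arm trial with a dichotomous outcome: Beta posteriors for each arm's event risk give, by Monte Carlo, posterior probabilities of any benefit and of a larger benefit. The counts and the flat Beta(1, 1) priors are illustrative assumptions, not data or priors from the re-analyzed trials.

```python
import numpy as np

rng = np.random.default_rng(12)

# Illustrative two-arm trial (events / patients); not data from any study in the review
events_t, n_t = 30, 200     # intervention arm
events_c, n_c = 45, 200     # control arm

# Conjugate Beta(1, 1) priors on each arm's event risk give Beta posteriors;
# Monte Carlo draws then give the posterior distribution of the relative risk
risk_t = rng.beta(1 + events_t, 1 + n_t - events_t, size=100_000)
risk_c = rng.beta(1 + events_c, 1 + n_c - events_c, size=100_000)
rr = risk_t / risk_c

print("P(any benefit, RR < 1)      =", np.mean(rr < 1.0))
print("P(larger benefit, RR < 0.8) =", np.mean(rr < 0.8))
```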

  5. The microscopic basis for strain localisation in porous media

    NASA Astrophysics Data System (ADS)

    Main, Ian; Kun, Ferenz; Pal, Gergo; Janosi, Zoltan

    2017-04-01

    The spontaneous emergence of localized cooperative deformation is an important phenomenon in the development of shear faults in porous media. It can be studied by empirical observation, by laboratory experiment or by numerical simulation. Here we investigate the evolution of damage and fragmentation leading up to and including system-sized failure in a numerical model of a porous rock, using discrete element simulations of the strain-controlled uni-axial compression of cylindrical samples of different finite size. As the system approaches macroscopic failure the number of fractures and the energy release rate both increase as a time-reversed Omori law, with scaling constants for the frequency-size distribution and the inter-event time, including their temporal evolution, that closely resemble those of natural experiments. The damage progressively localizes in a narrow shear band, ultimately a fault 'gouge' containing a large number of poorly-sorted non-cohesive fragments on a broad bandwidth of scales, with properties similar to those of natural and experimental faults. We determine the position and orientation of the central fault plane, the width of the deformation band and the spatial and mass distribution of fragments. The relative width of the deformation band decreases as a power law of the system size and the probability distribution of the angle of the damage plane converges to around 30 degrees, representing an emergent internal coefficient of friction of 0.7 or so. The mass of fragments is power law distributed, with an exponent that does not depend on scale, and is near that inferred for experimental and natural fault gouges. The fragments are in general angular, with a clear self-affine geometry. The consistency of this model with experimental and field results confirms the critical roles of preexisting heterogeneity, elastic interactions, and finite system size to grain size ratio on the development of faults, and ultimately to assessing the predictive power of forecasts of failure time in such media.

  6. Nucleation, growth and localisation of microcracks: implications for predictability of rock failure

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Kun, F.; Pál, G.; Jánosi, Z.

    2016-12-01

    The spontaneous emergence of localized co-operative deformation is an important phenomenon in the development of shear faults in porous media. It can be studied by empirical observation, by laboratory experiment or by numerical simulation. Here we investigate the evolution of damage and fragmentation leading up to and including system-sized failure in a numerical model of a porous rock, using discrete element simulations of the strain-controlled uniaxial compression of cylindrical samples of different finite size. As the system approaches macroscopic failure the number of fractures and the energy release rate both increase as a time-reversed Omori law, with scaling constants for the frequency-size distribution and the inter-event time, including their temporal evolution, that closely resemble those of natural experiments. The damage progressively localizes in a narrow shear band, ultimately a fault 'gouge' containing a large number of poorly-sorted non-cohesive fragments on a broad bandwidth of scales, with properties similar to those of natural and experimental faults. We determine the position and orientation of the central fault plane, the width of the deformation band and the spatial and mass distribution of fragments. The relative width of the deformation band decreases as a power law of the system size and the probability distribution of the angle of the damage plane converges to around 30 degrees, representing an emergent internal coefficient of friction of 0.7 or so. The mass of fragments is power law distributed, with an exponent that does not depend on scale, and is near that inferred for experimental and natural fault gouges. The fragments are in general angular, with a clear self-affine geometry. The consistency of this model with experimental and field results confirms the critical roles of pre-existing heterogeneity, elastic interactions, and finite system size to grain size ratio on the development of faults, and ultimately to assessing the predictive power of forecasts of failure time in such media.

  7. Distribution pattern and number of ticks on lizards.

    PubMed

    Dudek, Krzysztof; Skórka, Piotr; Sajkowska, Zofia Anna; Ekner-Grzyb, Anna; Dudek, Monika; Tryjanowski, Piotr

    2016-02-01

    The success of ectoparasites depends primarily on the site of attachment and body condition of their hosts. Ticks usually tend to aggregate on vertebrate hosts in specific areas, but the distribution pattern may depend on host body size and condition, sex, life stage or skin morphology. Here, we studied the distribution of ticks on lizards and tested the following hypothesis: the occurrence or high abundance of ticks is confined to body parts with smaller scales and larger interscalar length because such sites should provide ticks with superior attachment conditions. This study was performed in field conditions in central Poland in 2008-2011. In total, 500 lizards (Lacerta agilis) were caught and 839 ticks (Ixodes ricinus, larvae and nymphs) were collected from them. Using generalised linear mixed models, we found that the ticks were most abundant on forelimbs and their axillae, with 90% of ticks attached there. This part of the lizard body and the region behind the hindlimb were covered by the smallest scales with relatively wide gaps between them. This does not fully support our hypothesis that ticks prefer locations with easy access to skin between scales, because it does not explain why so few ticks were in the hindlimb area. We found that the abundance of ticks was positively correlated with lizard body size index (snout-vent length). Tick abundance was also higher in male and mature lizards than in female and young individuals. Autotomy had no effect on tick abundance. We found no correlation between tick size and lizard morphology, sex, autotomy and body size index. The probability of occurrence of dead ticks was positively linked with the total number of ticks on the lizard but there was no relationship between dead tick presence and lizard size, sex or age. Thus lizard body size and sex are the major factors affecting the abundance of ticks, and these parasites are distributed nearly exclusively on the host's forelimbs and their axillae. Copyright © 2015 Elsevier GmbH. All rights reserved.

  8. PSE-HMM: genome-wide CNV detection from NGS data using an HMM with Position-Specific Emission probabilities.

    PubMed

    Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi

    2016-11-03

    Copy Number Variation (CNV) is envisaged to be a major source of large structural variations in the human genome. In recent years, many studies have applied Next Generation Sequencing (NGS) data to CNV detection; however, more accurate computational tools are still needed. In this study, mate-pair NGS data are used for CNV detection in a Hidden Markov Model (HMM). The proposed HMM has position-specific emission probabilities, i.e. a Gaussian mixture distribution. Each component in the Gaussian mixture distribution captures a different type of aberration that is observed in the mate pairs, after being mapped to the reference genome. These aberrations may include any increase (decrease) in the insertion size or change in the direction of mate pairs that are mapped to the reference genome. This HMM with Position-Specific Emission probabilities (PSE-HMM) is utilized for the genome-wide detection of deletions and tandem duplications. The performance of PSE-HMM is evaluated on a simulated dataset and also on real data from a Yoruban HapMap individual, NA18507. PSE-HMM is effective in taking observation dependencies into account and reaches a high accuracy in detecting genome-wide CNVs. MATLAB programs are available at http://bs.ipm.ir/softwares/PSE-HMM/.
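
    The published implementation is in MATLAB; purely as a rough sketch of the idea of a position-specific Gaussian-mixture emission, the Python fragment below evaluates the emission probability of an observed insert size. The component weights, means and standard deviations are hypothetical placeholders, not values from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    def mixture_emission(x, weights, means, sds):
        """Emission probability of observation x under a Gaussian mixture.

        Each component is meant to capture one class of mate-pair signal,
        e.g. a normal insert size, an enlarged insert spanning a deletion,
        or a reduced insert / changed orientation over a tandem duplication.
        """
        weights = np.asarray(weights, dtype=float)
        return float(np.sum(weights * norm.pdf(x, loc=means, scale=sds)))

    # Hypothetical position-specific parameters (illustration only).
    weights = [0.90, 0.07, 0.03]       # normal, deletion-like, duplication-like
    means   = [400.0, 1400.0, 150.0]   # expected insert sizes in bp
    sds     = [40.0, 120.0, 60.0]

    for insert_size in (410.0, 1350.0):
        print(insert_size, mixture_emission(insert_size, weights, means, sds))
    ```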

  9. Precipitation Cluster Distributions: Current Climate Storm Statistics and Projected Changes Under Global Warming

    NASA Astrophysics Data System (ADS)

    Quinn, Kevin Martin

    The total amount of precipitation integrated across a precipitation cluster (contiguous precipitating grid cells exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance, expressed as the rate of water mass lost or latent heat released, i.e. the power of the disturbance. Probability distributions of cluster power are examined during boreal summer (May-September) and winter (January-March) using satellite-retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) 3B42 and Special Sensor Microwave Imager and Sounder (SSM/I and SSMIS) programs, model output from the High Resolution Atmospheric Model (HIRAM, roughly 0.25-0.5° resolution), seven 1-2° resolution members of the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiment, and the National Center for Atmospheric Research Large Ensemble (NCAR LENS). Spatial distributions of precipitation-weighted centroids are also investigated in observations (TRMM-3B42) and climate models during winter as a metric for changes in mid-latitude storm tracks. Observed probability distributions for both seasons are scale-free from the smallest clusters up to a cutoff scale at high cluster power, after which the probability density drops rapidly. When low rain rates are excluded by choosing a minimum rain rate threshold in defining clusters, the models accurately reproduce observed cluster power statistics and winter storm tracks. Changes in behavior in the tail of the distribution, above the cutoff, are important for impacts since these quantify the frequency of the most powerful storms. End-of-century cluster power distributions and storm track locations are investigated in these models under a "business as usual" global warming scenario. The probability of high cluster power events increases by end-of-century across all models, by up to an order of magnitude for the highest-power events for which statistics can be computed. For the three models in the suite with continuous time series of high resolution output, there is substantial variability in when these probability increases for the most powerful precipitation clusters become detectable, ranging from detectable within the observational period to statistically significant trends emerging only after 2050. A similar analysis of National Centers for Environmental Prediction (NCEP) Reanalysis 2 and SSM/I-SSMIS rain rate retrievals in the recent observational record does not yield reliable evidence of trends in high-power cluster probabilities at this time. Large impacts to mid-latitude storm tracks are projected over the West Coast and eastern North America, with no fewer than 8 of the 9 models examined showing large increases by end-of-century in the probability density of the most powerful storms, ranging up to a factor of 6.5 in the highest range bin for which historical statistics are computed. However, within these regional domains, there is considerable variation among models in pinpointing exactly where the largest increases will occur.
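
    As a minimal sketch of the cluster definition used above (contiguous precipitating grid cells exceeding a minimum rain rate, integrated into a cluster "power"), the code below labels connected components of a synthetic rain-rate field; the grid, threshold and rain-rate statistics are arbitrary assumptions, and no grid-cell-area or latent-heat weighting is applied.

    ```python
    import numpy as np
    from scipy import ndimage

    # Synthetic rain-rate field (mm/h) on a lat-lon grid; purely illustrative.
    rng = np.random.default_rng(1)
    rain = rng.gamma(shape=0.3, scale=4.0, size=(180, 360))

    min_rate = 0.5                          # minimum rain rate defining a cluster
    wet = rain >= min_rate

    # Contiguous precipitating grid cells (4-connectivity) form one cluster.
    labels, n_clusters = ndimage.label(wet)

    # "Cluster power": rain rate summed over each cluster; multiplying by the
    # grid-cell area and the latent heat of condensation would give watts.
    cluster_power = ndimage.sum(rain, labels, index=np.arange(1, n_clusters + 1))

    print("number of clusters:", n_clusters)
    print("largest cluster power:", cluster_power.max())
    print("mean cluster power:", cluster_power.mean())
    ```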

  10. Phase transition of social learning collectives and the echo chamber.

    PubMed

    Mori, Shintaro; Nakayama, Kazuaki; Hisakado, Masato

    2016-11-01

    We study a simple model for social learning agents in a restless multiarmed bandit. There are N agents, and the bandit has M good arms that change to bad with the probability q_{c}/N. If the agents do not know a good arm, they look for it by a random search (with success probability q_{I}) or copy the information of other agents' good arms (with success probability q_{O}) with probabilities 1-p or p, respectively. The distribution of the agents over the M good arms obeys the Yule distribution with the power-law exponent 1+γ in the limit N,M→∞, where γ=1+(1-p)q_{I}/(pq_{O}). The system shows a phase transition at p_{c}=q_{I}/(q_{I}+q_{O}). For p<p_{c} (p>p_{c}), the variance of N_{1} per agent is finite (diverges as ∝N^{2-γ} with N). There is a threshold value N_{s} for the system size that scales as ln N_{s}∝1/(γ-1). The expected value of the number of agents with a good arm, N_{1}, increases with p for N>N_{s}. For p>p_{c} and N
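
    A small worked example of the two closed-form quantities quoted above, the Yule exponent γ = 1 + (1 - p)q_I/(p q_O) and the transition point p_c = q_I/(q_I + q_O), evaluated for hypothetical values of q_I and q_O:

    ```python
    # Closed-form quantities quoted in the abstract.
    def yule_exponent(p, q_I, q_O):
        # gamma = 1 + (1 - p) * q_I / (p * q_O)
        return 1.0 + (1.0 - p) * q_I / (p * q_O)

    def p_critical(q_I, q_O):
        # p_c = q_I / (q_I + q_O); at p = p_c the exponent equals 2.
        return q_I / (q_I + q_O)

    q_I, q_O = 0.1, 0.3                 # hypothetical success probabilities
    print("p_c =", p_critical(q_I, q_O))
    for p in (0.2, 0.25, 0.5):
        print("p =", p, "gamma =", yule_exponent(p, q_I, q_O))
    ```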

  11. Estimating the Grain Size Distribution of Mars based on Fragmentation Theory and Observations

    NASA Astrophysics Data System (ADS)

    Charalambous, C.; Pike, W. T.; Golombek, M.

    2017-12-01

    We present here a fundamental extension to the fragmentation theory [1] which yields estimates of the distribution of particle sizes of a planetary surface. The model is valid within the size regimes of surfaces whose genesis is best reflected by the evolution of fragmentation phenomena governed by either the process of meteoritic impacts, or by a mixture with aeolian transportation at the smaller sizes. The key parameter of the model, the regolith maturity index, can be estimated as an average of that observed at a local site using cratering size-frequency measurements, orbital and surface image-detected rock counts and observations of sub-mm particles at landing sites. Through validation against ground truth from previous landed missions, this approach has been used at the InSight landing ellipse on Mars to extrapolate rock size distributions in HiRISE images down to 5 cm rock size, both to determine the landing safety risk and the subsequent probability of obstruction by a rock of the deployed heat flow mole down to 3-5 m depth [2]. Here we focus on a continuous extrapolation down to 600 µm coarse sand particles, the upper size limit that may be present through aeolian processes [3]. The parameters of the model are first derived for the fragmentation process that has produced the observable rocks via meteorite impacts over time, and therefore extrapolation into a size regime that is affected by aeolian processes has limited justification without further refinement. Incorporating thermal inertia estimates, size distributions observed by the Spirit and Opportunity Microscopic Imager [4] and Atomic Force and Optical Microscopy from the Phoenix Lander [5], the model's parameters in combination with synthesis methods are quantitatively refined further to allow transition within the aeolian transportation size regime. In addition, because the model is expressed in terms of fractional mass abundance, the percentage of material by volume or mass that resides within the transported fraction on Mars can be estimated. The parameters of the model thus allow for a better understanding of the regolith's history, which has implications for the origin of sand on Mars. [1] Charalambous, PhD thesis, ICL, 2015 [2] Golombek et al., Space Science Reviews, 2016 [3] Kok et al., ROPP, 2012 [4] McGlynn et al., JGR, 2011 [5] Pike et al., GRL, 2011

  12. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
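
    The quantity CUMBIN computes, the reliability of a k-out-of-n system of independent components with common reliability p, can be sketched in a few lines of Python (an illustrative re-implementation, not the original C source). The second function uses the standard identity P(X >= k) = I_p(k, n - k + 1) relating the binomial tail to the regularized incomplete beta function, which is presumably the incomplete-beta evaluation mentioned in the description.

    ```python
    from math import comb
    from scipy.special import betainc

    def k_out_of_n_reliability(n, k, p):
        """P(at least k of n independent components work), each working w.p. p."""
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    def k_out_of_n_reliability_beta(n, k, p):
        """Same quantity via the regularized incomplete beta function:
        P(X >= k) = I_p(k, n - k + 1) for X ~ Binomial(n, p), 0 < k <= n."""
        return betainc(k, n - k + 1, p)

    n, k, p = 10, 8, 0.95
    print(k_out_of_n_reliability(n, k, p))        # direct summation
    print(k_out_of_n_reliability_beta(n, k, p))   # should agree
    ```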

  13. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.

  14. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions

    PubMed Central

    Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas

    2015-01-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols. PMID:27158191

  15. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  16. The Interannual Stability of Cumulative Frequency Distributions for Convective System Size and Intensity

    NASA Technical Reports Server (NTRS)

    Mohr, Karen I.; Molinari, John; Thorncroft, Chris D.

    2010-01-01

    The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by conditions favoring larger and/or more intense convective systems. Convective systems were defined from TRMM data as a cluster of pixels with an 85 GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 km². The study database consisted of convective systems in West Africa from May-Sep 1998-2007 and in the western Pacific from May-Nov 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences among the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each set, there was no statistically significant interannual variability. Sub-setting the database revealed some sensitivity in distribution shape to the size of the sampling area, length of sample period, and climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is wetter or drier than normal.

  17. A polymer, random walk model for the size-distribution of large DNA fragments after high linear energy transfer radiation

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.

    2000-01-01

    DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
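
    For contrast with the clustered-break model above, the random-breakage (broken-stick) limit mentioned in the final sentence can be sketched as a Poisson number of double-strand breaks placed uniformly along a chromosome; the chromosome length and break density below are arbitrary assumptions, not fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    chromosome_mbp = 100.0        # chromosome length in Mbp (illustrative)
    breaks_per_mbp = 0.05         # mean DSB density per Mbp (illustrative)

    def random_breakage_fragments(length, density, rng):
        """Fragment sizes under the random-breakage (broken-stick) model:
        a Poisson number of breaks placed uniformly along the chromosome."""
        n_breaks = rng.poisson(density * length)
        cuts = np.sort(rng.uniform(0.0, length, size=n_breaks))
        edges = np.concatenate(([0.0], cuts, [length]))
        return np.diff(edges)

    # Pool fragments over many simulated cells; without clustering the sizes
    # are approximately exponential with mean 1/density (edge effects aside).
    fragments = np.concatenate(
        [random_breakage_fragments(chromosome_mbp, breaks_per_mbp, rng)
         for _ in range(1000)])
    print("mean fragment size (Mbp):", fragments.mean())
    ```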

  18. Dimension-dependent stimulated radiative interaction of a single electron quantum wavepacket

    NASA Astrophysics Data System (ADS)

    Gover, Avraham; Pan, Yiming

    2018-06-01

    In the foundations of quantum mechanics, the spatial dimensions of an electron wavepacket are understood only in terms of an expectation value - the probability distribution of the particle location. One can still inquire how the quantum electron wavepacket size affects a physical process. Here we address the fundamental physics problem of particle-wave duality and the measurability of a free electron quantum wavepacket. Our analysis of the stimulated radiative interaction of an electron wavepacket, accompanied by numerical computations, reveals two limits. In the quantum regime of long wavepacket size relative to the radiation wavelength, one obtains only quantum-recoil multiphoton sidebands in the electron energy spectrum. In the opposite regime, the wavepacket interaction approaches the limit of classical point-particle acceleration. The wavepacket features can be revealed in experiments carried out in the intermediate regime of wavepacket size commensurate with the radiation wavelength.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jager, Yetta; Efroymson, Rebecca Ann; Sublette, K.

    Quantitative tools are needed to evaluate the ecological effects of increasing petroleum production. In this article, we describe two stochastic models for simulating the spatial distribution of brine spills on a landscape. One model uses general assumptions about the spatial arrangement of spills and their sizes; the second model distributes spills by siting rectangular well complexes and conditioning spill probabilities on the configuration of pipes. We present maps of landscapes with spills produced by the two methods and compare the ability of the models to reproduce a specified spill area. A strength of the models presented here is their ability to extrapolate from the existing landscape to simulate landscapes with a higher (or lower) density of oil wells.

  20. Transport dynamics -- one particle at a time

    NASA Astrophysics Data System (ADS)

    Granick, Steve

    2010-03-01

    By watching particles and molecules diffuse, one-by-one, the full displacement probability distribution can be measured, enabling one to see experimentally how, how fast, and with what fidelity to classical assumptions, particles and molecules diffuse through complex environments. This allows us to measure the confining tube potential through which thin actin filaments reptate, and to observe some of the striking differences in diffusion rate between colloidal particles and phospholipid vesicles of the same size. Pervasively, we find that Brownian diffusion can be non-Gaussian.

  1. Extremes and bursts in complex multi-scale plasmas

    NASA Astrophysics Data System (ADS)

    Watkins, N. W.; Chapman, S. C.; Hnat, B.

    2012-04-01

    Quantifying the spectrum of sizes and durations of large and/or long-lived fluctuations in complex, multi-scale, space plasmas is a topic of both theoretical and practical importance. The predictions of inherently multi-scale physical theories such as MHD turbulence have given one direct stimulus for its investigation. There are also space weather implications to an improved ability to assess the likelihood of an extreme fluctuation of a given size. Our intuition as scientists tends to be formed on the familiar Gaussian "normal" distribution, which has a very low likelihood of extreme fluctuations. Perhaps surprisingly, there is both theoretical and observational evidence that favours non-Gaussian, heavier-tailed, probability distributions for some space physics datasets. Additionally there is evidence for the existence of long-ranged memory between the values of fluctuations. In this talk I will show how such properties can be captured in a preliminary way by a self-similar, fractal model. I will show how such a fractal model can be used to make predictions for experimentally accessible quantities like the size and duration of a burst (a sequence of values that exceed a given threshold), or the survival probability of a burst [cf. preliminary results in Watkins et al, PRE, 2009]. In real-world time series, scaling behaviour need not be "mild" enough to be captured by a single self-similarity exponent H, but might instead require a "wild" multifractal spectrum of scaling exponents [e.g. Rypdal and Rypdal, JGR, 2011; Moloney and Davidsen, JGR, 2011] to give a complete description. I will discuss preliminary work on extending the burst approach into the multifractal domain [see also Watkins et al, chapter in press for AGU Chapman Conference on Complexity and Extreme Events in the Geosciences, Hyderabad].
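
    A minimal sketch of the burst definition used above, a maximal run of values exceeding a fixed threshold, with the burst "size" taken as the integrated excess over the threshold and the duration as the run length; the toy signal and threshold are assumptions, not data from the talk.

    ```python
    import numpy as np

    def bursts(series, threshold, dt=1.0):
        """Return (sizes, durations) of bursts: maximal runs of samples above
        `threshold`. Size is the excess over the threshold integrated over the
        run; duration is the run length times the sampling interval dt."""
        above = series > threshold
        edges = np.diff(above.astype(int))
        starts = np.where(edges == 1)[0] + 1
        ends = np.where(edges == -1)[0] + 1
        if above[0]:
            starts = np.r_[0, starts]
        if above[-1]:
            ends = np.r_[ends, len(series)]
        sizes = np.array([np.sum(series[s:e] - threshold) * dt
                          for s, e in zip(starts, ends)])
        durations = (ends - starts) * dt
        return sizes, durations

    rng = np.random.default_rng(3)
    signal = np.cumsum(rng.standard_normal(10_000)) * 0.1   # toy fluctuating series
    sizes, durations = bursts(signal, threshold=1.0)
    print("bursts found:", len(sizes))
    if len(sizes):
        print("largest size:", sizes.max(), "longest duration:", durations.max())
    ```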

  2. Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure

    PubMed Central

    Karsanina, Marina V.; Gerke, Kirill M.; Skvortsova, Elena B.; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between biologically active soil root zone and atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore size and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin-sections (2.21×2.21 cm²) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as a part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of spatial dynamics of soil microbial activity. PMID:26010779
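
    As a rough illustration of the first of the three descriptors, the two-point probability function S2 of the pore phase can be estimated from a binary image by an FFT autocorrelation of the pore indicator (periodic boundaries assumed); the synthetic "soil" image below is an arbitrary stand-in for a segmented thin-section.

    ```python
    import numpy as np

    def two_point_probability(binary_image):
        """S2(dx, dy): probability that two points separated by (dx, dy) both
        fall in the phase marked 1, estimated via a periodic FFT
        autocorrelation of the indicator field."""
        f = binary_image.astype(float)
        F = np.fft.fft2(f)
        auto = np.fft.ifft2(F * np.conj(F)).real / f.size
        return np.fft.fftshift(auto)      # zero lag moved to the array centre

    # Synthetic two-phase image: smoothed noise thresholded at 40% porosity.
    rng = np.random.default_rng(4)
    noise = rng.standard_normal((256, 256))
    bump = np.exp(-0.5 * (np.hypot(*(np.indices((256, 256)) - 128)) / 4.0) ** 2)
    smooth = np.fft.ifft2(np.fft.fft2(noise) * np.abs(np.fft.fft2(bump))).real
    pores = smooth > np.quantile(smooth, 0.6)     # 1 = pore, 0 = solid

    S2 = two_point_probability(pores)
    print("porosity phi:", pores.mean())
    print("S2 at zero lag (should equal phi):", S2[128, 128])
    ```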

  3. Universal spatial correlation functions for describing and reconstructing soil microstructure.

    PubMed

    Karsanina, Marina V; Gerke, Kirill M; Skvortsova, Elena B; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between biologically active soil root zone and atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore size and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin-sections (2.21×2.21 cm²) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as a part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of spatial dynamics of soil microbial activity.

  4. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.

  5. Size relationships between the parasitic copepod, Lernanthropus cynoscicola , and its fish host, Cynoscion guatucupa.

    PubMed

    Timi, J T; Lanfranchi, A L

    2006-02-01

    The effects of the size of Cynoscion guatucupa on the size and demographic parameters of its parasitic copepod Lernanthropus cynoscicola were evaluated. Prevalence of copepods increased with host size up to fish of intermediate length, then it decreased, probably because changes in the size of gill filaments affect their attachment capability, enhancing the possibility of being detached by respiratory currents. Body length of copepods was significantly correlated with host length, indicating that only parasites of an 'adequate' size can be securely attached to a fish of a given size. The absence of a relationship between the coefficient of variability in copepod length and both host length and number of conspecifics, together with the host-size dependence of both male and juvenile female sizes, prevents interpreting this relationship as a phenomenon of developmental plasticity. Therefore, the observed peak of prevalence could reflect the distribution of size frequencies in the population of copepods, with more individuals near the average length. In conclusion, the 'optimum' host size for L. cynoscicola could merely be the adequate size for most individuals in the population, depending, therefore, on a populational attribute of the parasites. However, its location along the host size range could be determined by a balance between fecundity and number of available hosts, which increase and decrease, respectively, with both host and parasite size.

  6. Generalised Sandpile Dynamics on Artificial and Real-World Directed Networks

    PubMed Central

    Zachariou, Nicky; Expert, Paul; Takayasu, Misako; Christensen, Kim

    2015-01-01

    The main finding of this paper is a novel avalanche-size exponent τ ≈ 1.87 when the generalised sandpile dynamics evolves on the real-world Japanese inter-firm network. The topology of this network is non-layered and directed, displaying the typical bow tie structure found in real-world directed networks, with cycles and triangles. We show that one can move from a strictly layered regular lattice to a more fluid structure of the inter-firm network in a few simple steps. Relaxing the regular lattice structure by introducing an interlayer distribution for the interactions forces the scaling exponent of the avalanche-size probability density function τ out of the two-dimensional directed sandpile universality class τ = 4/3, into the mean field universality class τ = 3/2. Numerical investigation shows that these two classes are the only ones that exist on the directed sandpile, regardless of the underlying topology, as long as it is strictly layered. Randomly adding a small proportion of links connecting non-adjacent layers in an otherwise layered network takes the system out of the mean field regime to produce non-trivial avalanche-size probability density functions. Although these do not display proper scaling, they closely reproduce the behaviour observed on the Japanese inter-firm network. PMID:26606143

  7. Generalised Sandpile Dynamics on Artificial and Real-World Directed Networks.

    PubMed

    Zachariou, Nicky; Expert, Paul; Takayasu, Misako; Christensen, Kim

    2015-01-01

    The main finding of this paper is a novel avalanche-size exponent τ ≈ 1.87 when the generalised sandpile dynamics evolves on the real-world Japanese inter-firm network. The topology of this network is non-layered and directed, displaying the typical bow tie structure found in real-world directed networks, with cycles and triangles. We show that one can move from a strictly layered regular lattice to a more fluid structure of the inter-firm network in a few simple steps. Relaxing the regular lattice structure by introducing an interlayer distribution for the interactions forces the scaling exponent of the avalanche-size probability density function τ out of the two-dimensional directed sandpile universality class τ = 4/3, into the mean field universality class τ = 3/2. Numerical investigation shows that these two classes are the only ones that exist on the directed sandpile, regardless of the underlying topology, as long as it is strictly layered. Randomly adding a small proportion of links connecting non-adjacent layers in an otherwise layered network takes the system out of the mean field regime to produce non-trivial avalanche-size probability density functions. Although these do not display proper scaling, they closely reproduce the behaviour observed on the Japanese inter-firm network.
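
    A minimal, strictly layered directed sandpile (in the spirit of the two-dimensional universality class discussed above, not the generalised dynamics on the inter-firm network) can be simulated as follows; the lattice size, threshold and run length are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    L, W = 64, 64                 # number of layers and layer width (illustrative)
    z_c = 2                       # toppling threshold
    height = np.zeros((L, W), dtype=int)

    def drive_and_relax(height):
        """Add one grain at a random site of the top layer and relax.
        A site holding at least z_c grains topples, sending one grain to each
        of two nearest sites in the layer below (grains leave the system at
        the bottom layer). Returns the avalanche size (number of topplings)."""
        height[0, rng.integers(W)] += 1
        size = 0
        for layer in range(L):                      # grains only move downwards
            unstable = np.where(height[layer] >= z_c)[0]
            while unstable.size:
                for j in unstable:
                    height[layer, j] -= z_c
                    size += 1
                    if layer + 1 < L:
                        height[layer + 1, j] += 1
                        height[layer + 1, (j + 1) % W] += 1
                unstable = np.where(height[layer] >= z_c)[0]
        return size

    sizes = np.array([drive_and_relax(height) for _ in range(20000)])
    sizes = sizes[5000:]                            # discard the transient
    nonzero = sizes[sizes > 0]
    print("avalanches:", nonzero.size, "mean size:", nonzero.mean())
    ```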

  8. On the Number of Non-equivalent Ancestral Configurations for Matching Gene Trees and Species Trees.

    PubMed

    Disanto, Filippo; Rosenberg, Noah A

    2017-09-14

    An ancestral configuration is one of the combinatorially distinct sets of gene lineages that, for a given gene tree, can reach a given node of a specified species tree. Ancestral configurations have appeared in recursive algebraic computations of the conditional probability that a gene tree topology is produced under the multispecies coalescent model for a given species tree. For matching gene trees and species trees, we study the number of ancestral configurations, considered up to an equivalence relation introduced by Wu (Evolution 66:763-775, 2012) to reduce the complexity of the recursive probability computation. We examine the largest number of non-equivalent ancestral configurations possible for a given tree size n. Whereas the smallest number of non-equivalent ancestral configurations increases polynomially with n, we show that the largest number increases with [Formula: see text], where k is a constant that satisfies [Formula: see text]. Under a uniform distribution on the set of binary labeled trees with a given size n, the mean number of non-equivalent ancestral configurations grows exponentially with n. The results refine an earlier analysis of the number of ancestral configurations considered without applying the equivalence relation, showing that use of the equivalence relation does not alter the exponential nature of the increase with tree size.

  9. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
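
    A sketch of the two-step recipe described above, under the simplifying assumptions of an isotropic target spectrum and an exponential target amplitude PDF: white Gaussian noise is coloured in Fourier space, and the Gaussian marginal is then mapped through its CDF and the target inverse CDF (a memoryless transform). This simple version ignores the spectral distortion introduced by the second step; all parameters are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    N = 256

    # Step 1: colour white Gaussian noise with a prescribed power spectrum.
    kx = np.fft.fftfreq(N)[:, None]
    ky = np.fft.fftfreq(N)[None, :]
    k = np.hypot(kx, ky)
    psd = 1.0 / (1.0 + (k / 0.05) ** 2) ** 1.5     # illustrative target spectrum

    white = rng.standard_normal((N, N))
    coloured = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
    coloured = (coloured - coloured.mean()) / coloured.std()

    # Step 2: memoryless transform of the Gaussian marginal into the target
    # amplitude PDF (here exponential with mean 2), preserving rank order.
    u = stats.norm.cdf(coloured)
    field = stats.expon.ppf(u, scale=2.0)

    print("marginal mean (target 2.0):", field.mean())
    ```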

  10. The global impact distribution of Near-Earth objects

    NASA Astrophysics Data System (ADS)

    Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.

    2016-02-01

    Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution, and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking the impact probabilities into account introduced significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and, thus, represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.

  11. Patterns of diversity in soft-bodied meiofauna: dispersal ability and body size matter.

    PubMed

    Curini-Galletti, Marco; Artois, Tom; Delogu, Valentina; De Smet, Willem H; Fontaneto, Diego; Jondelius, Ulf; Leasi, Francesca; Martínez, Alejandro; Meyer-Wachsmuth, Inga; Nilsson, Karin Sara; Tongiorgi, Paolo; Worsaae, Katrine; Todaro, M Antonio

    2012-01-01

    Biogeographical and macroecological principles are derived from patterns of distribution in large organisms, whereas microscopic ones have often been considered uninteresting, because of their supposed wide distribution. Here, after reporting the results of an intensive faunistic survey of marine microscopic animals (meiofauna) in Northern Sardinia, we test for the effect of body size, dispersal ability, and habitat features on the patterns of distribution of several groups. As a dataset we use the results of a workshop held at La Maddalena (Sardinia, Italy) in September 2010, aimed at studying selected taxa of soft-bodied meiofauna (Acoela, Annelida, Gastrotricha, Nemertodermatida, Platyhelminthes and Rotifera), in conjunction with data on the same taxa obtained during a previous workshop hosted at Tjärnö (Western Sweden) in September 2007. Using linear mixed effects models and model averaging while accounting for sampling bias and potential pseudoreplication, we found evidence that: (1) meiofaunal groups with more restricted distribution are the ones with low dispersal potential; (2) meiofaunal groups with higher probability of finding new species for science are the ones with low dispersal potential; (3) the proportion of the global species pool of each meiofaunal group present in each area at the regional scale is negatively related to body size, and positively related to their occurrence in the endobenthic habitat. Our macroecological analysis of meiofauna, in the framework of the ubiquity hypothesis for microscopic organisms, indicates that not only body size but, above all, dispersal ability and occurrence in the endobenthic habitat are important correlates of diversity for these understudied animals, with different importance at different spatial scales. Furthermore, since the Western Mediterranean is one of the best-studied areas in the world, the large number of undescribed species (37%) highlights that the census of marine meiofauna is still very far from being complete.

  12. Laser beam propagation in atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Murty, S. S. R.

    1979-01-01

    The optical effects of atmospheric turbulence on the propagation of low power laser beams are reviewed in this paper. The optical effects are produced by the temperature fluctuations which result in fluctuations of the refractive index of air. The commonly-used models of index-of-refraction fluctuations are presented. Laser beams experience fluctuations of beam size, beam position, and intensity distribution within the beam due to refractive turbulence. Some of the observed effects are qualitatively explained by treating the turbulent atmosphere as a collection of moving gaseous lenses of various sizes. Analytical results and experimental verifications of the variance, covariance and probability distribution of intensity fluctuations in weak turbulence are presented. For stronger turbulence, a saturation of the optical scintillations is observed. The saturation of scintillations involves a progressive break-up of the beam into multiple patches; the beam loses some of its lateral coherence. Heterodyne systems operating in a turbulent atmosphere experience a loss of heterodyne signal due to the destruction of coherence.

  13. Scaling properties of a rice-pile model: inertia and friction effects.

    PubMed

    Khfifi, M; Loulidi, M

    2008-11-01

    We present a rice-pile cellular automaton model that includes inertial and friction effects. This model is studied in one dimension, where the updating of metastable sites is done according to a stochastic dynamics governed by a probabilistic toppling parameter p that depends on the accumulated energy of moving grains. We investigate the scaling properties of the model using finite-size scaling analysis. The avalanche size, the lifetime, and the residence time distributions exhibit a power-law behavior. Their corresponding critical exponents, τ, y, and y_r, respectively, are not universal. They present continuous variation versus the parameters of the system. The maximal value of the critical exponent τ that our model gives is very close to the experimental one, τ = 2.02 [Frette, Nature (London) 379, 49 (1996)], and the probability distribution of the residence time is in good agreement with the experimental results. We note that the critical behavior is observed only in a certain range of parameter values of the system, which correspond to low inertia and high friction.

  14. Boulders on asteroid Toutatis as observed by Chang’e-2

    PubMed Central

    Jiang, Yun; Ji, Jianghui; Huang, Jiangchuan; Marchi, Simone; Li, Yuan; Ip, Wing-Huen

    2015-01-01

    Boulders are ubiquitously found on the surfaces of small rocky bodies in the inner solar system and their spatial and size distributions give insight into the geological evolution and collisional history of the parent bodies. Using images acquired by the Chang’e-2 spacecraft, more than 200 boulders have been identified over the imaged area of the near-Earth asteroid Toutatis. The cumulative boulder size frequency distribution (SFD) shows a steep slope of −4.4 ± 0.1, which is indicative of a high degree of fragmentation. Similar to Itokawa, Toutatis probably has a rubble-pile structure, as most boulders on its surface cannot solely be explained by impact cratering. The significantly steeper slope for Toutatis’ boulder SFD compared to Itokawa may imply a different preservation state or diverse formation scenarios. In addition, the cumulative crater SFD has been used to estimate a surface crater retention age of approximately 1.6 ± 0.3 Gyr. PMID:26522880

  15. A stochastic model for the probability of malaria extinction by mass drug administration.

    PubMed

    Pemberton-Ross, Peter; Chitnis, Nakul; Pothin, Emilie; Smith, Thomas A

    2017-09-18

    Mass drug administration (MDA) has been proposed as an intervention to achieve local extinction of malaria. Although its effect on the reproduction number is short lived, extinction may subsequently occur in a small population due to stochastic fluctuations. This paper examines how the probability of stochastic extinction depends on population size, MDA coverage and the reproduction number under control, R_c. A simple compartmental model is developed which is used to compute the probability of extinction using probability generating functions. The expected time to extinction in small populations after MDA for various scenarios in this model is calculated analytically. The results indicate that two conditions must be met for MDA to achieve elimination. Firstly, R_c must be sustained at R_c < 1.2 to avoid the rapid re-establishment of infections in the population. Secondly, the MDA must produce effective cure rates of >95% to have a non-negligible probability of successful elimination. Stochastic fluctuations only significantly affect the probability of extinction in populations of about 1000 individuals or less. The expected time to extinction via stochastic fluctuation is less than 10 years only in populations of fewer than about 150 individuals. Clustering of secondary infections and of MDA distribution both contribute positively to the potential probability of success, indicating that MDA would most effectively be administered at the household level. There are very limited circumstances in which MDA will lead to local malaria elimination with a substantial probability.
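
    The probability-generating-function calculation referred to above can be illustrated with the simplest possible case, a Galton-Watson branching process with Poisson(R_c) offspring, whose extinction probability is the smallest fixed point of the offspring PGF; this is a toy sketch, not the compartmental model of the paper.

    ```python
    import numpy as np

    def extinction_probability(R_c, tol=1e-12, max_iter=10_000):
        """Extinction probability of a Galton-Watson process with Poisson(R_c)
        offspring: the smallest root of q = G(q), where the offspring PGF is
        G(q) = exp(R_c * (q - 1)). Fixed-point iteration from q = 0 converges
        to that smallest root."""
        q = 0.0
        for _ in range(max_iter):
            q_new = np.exp(R_c * (q - 1.0))
            if abs(q_new - q) < tol:
                break
            q = q_new
        return q

    for R_c in (0.8, 1.0, 1.1, 1.2, 1.5):
        # For R_c <= 1 extinction is certain; above 1 a chain of infections
        # seeded by a single case escapes extinction with probability 1 - q.
        print(R_c, extinction_probability(R_c))
    ```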

  16. Assessment of local variability by high-throughput e-beam metrology for prediction of patterning defect probabilities

    NASA Astrophysics Data System (ADS)

    Wang, Fuming; Hunsche, Stefan; Anunciado, Roy; Corradi, Antonio; Tien, Hung Yu; Tang, Peng; Wei, Junwei; Wang, Yongjun; Fang, Wei; Wong, Patrick; van Oosten, Anton; van Ingen Schenau, Koen; Slachter, Bram

    2018-03-01

    We present an experimental study of pattern variability and defectivity, based on a large data set with more than 112 million SEM measurements from an HMI high-throughput e-beam tool. The test case is a 10nm node SRAM via array patterned with a DUV immersion LELE process, where we see a variation in mean size and litho sensitivities between different unique via patterns that leads to seemingly qualitative differences in defectivity. The large available data volume enables further analysis to reliably distinguish global and local CDU variations, including a breakdown into local systematics and stochastics. A closer inspection of the tail end of the distributions and estimation of defect probabilities concludes that there is a common defect mechanism and defect threshold despite the observed differences of specific pattern characteristics. We expect that the analysis methodology can be applied for defect probability modeling as well as general process qualification in the future.

  17. Theory of hydrophobicity: transient cavities in molecular liquids

    NASA Technical Reports Server (NTRS)

    Pratt, L. R.; Pohorille, A.

    1992-01-01

    Observation of the size distribution of transient cavities in computer simulations of water, n-hexane, and n-dodecane under benchtop conditions shows that the sizes of cavities are more sharply defined in liquid water but the most-probable-size cavities are about the same size in each of these liquids. The calculated solvent atomic density in contact with these cavities shows that water applies more force per unit area of cavity surface than do the hydrocarbon liquids. This contact density, or "squeezing" force, reaches a maximum near cavity diameters of 2.4 angstroms. The results for liquid water are compared to the predictions of simple theories and, in addition, to results for a reference simple liquid. The numerical data for water at a range of temperatures are analyzed to extract a surface free energy contribution to the work of formation of atomic-size cavities. Comparison with the liquid-vapor interfacial tensions of the model liquids studied here indicates that the surface free energies extracted for atomic-size cavities cannot be accurately identified with the macroscopic surface tensions of the systems.

  18. Theory of hydrophobicity: Transient cavities in molecular liquids

    PubMed Central

    Pratt, Lawrence R.; Pohorille, Andrew

    1992-01-01

    Observation of the size distribution of transient cavities in computer simulations of water, n-hexane, and n-dodecane under benchtop conditions shows that the sizes of cavities are more sharply defined in liquid water but the most-probable-size cavities are about the same size in each of these liquids. The calculated solvent atomic density in contact with these cavities shows that water applies more force per unit area of cavity surface than do the hydrocarbon liquids. This contact density, or “squeezing” force, reaches a maximum near cavity diameters of 2.4 Å. The results for liquid water are compared to the predictions of simple theories and, in addition, to results for a reference simple liquid. The numerical data for water at a range of temperatures are analyzed to extract a surface free energy contribution to the work of formation of atomic-size cavities. Comparison with the liquid-vapor interfacial tensions of the model liquids studied here indicates that the surface free energies extracted for atomic-size cavities cannot be accurately identified with the macroscopic surface tensions of the systems. PMID:11537863

  19. An effective inversion algorithm for retrieving bimodal aerosol particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming

    2014-12-01

    The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, as an approximation for optically large and soft spheres, i.e., χ≫1 and |m-1|≪1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as the general distribution function to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the aerosol robotic network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as versatile distribution functions to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available.
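
    The forward model above couples an anomalous diffraction approximation to the Beer-Lambert law. Purely as an illustration, the sketch below uses the standard van de Hulst ADA extinction efficiency for a non-absorbing sphere (the paper uses a modified ADA) inside a Beer-Lambert-type integral over an assumed lognormal size distribution; the wavelengths, refractive index and PSD parameters are made up.

    ```python
    import numpy as np

    def q_ext_ada(x, m):
        """Van de Hulst anomalous diffraction approximation for the extinction
        efficiency of a non-absorbing sphere with size parameter x and real
        refractive index m:
        Q_ext = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)), rho = 2 x (m - 1).
        """
        rho = 2.0 * x * (m - 1.0)
        return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

    wavelengths = np.array([0.4, 0.5, 0.675, 0.87, 1.02])   # micrometres
    m = 1.45                                                # real refractive index
    r = np.linspace(0.05, 5.0, 2000)                        # radii, micrometres
    n_r = np.exp(-0.5 * (np.log(r / 0.5) / 0.6) ** 2) / r   # lognormal-shaped PSD

    # Beer-Lambert-type spectral extinction (relative units, simple Riemann sum).
    dr = r[1] - r[0]
    for lam in wavelengths:
        x = 2.0 * np.pi * r / lam
        tau = np.sum(q_ext_ada(x, m) * np.pi * r**2 * n_r) * dr
        print(f"lambda = {lam:5.3f} um, relative extinction = {tau:.3f}")
    ```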

  20. Normal and abnormal tissue identification system and method for medical images such as digital mammograms

    NASA Technical Reports Server (NTRS)

    Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)

    2001-01-01

    A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.

  1. Critical behavior in earthquake energy dissipation

    NASA Astrophysics Data System (ADS)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes bounded by latitudes 29° S and 35.5° S and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, the time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for the probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place due to the largest bursts of activity, such as major earthquakes, as opposed to smaller activations, which contribute less significantly though they have greater relative occurrence. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.

  2. Effect of bow-type initial imperfection on reliability of minimum-weight, stiffened structural panels

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac

    1993-01-01

    Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.

  3. Calling patterns in human communication dynamics

    PubMed Central

    Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2013-01-01

    Modern technologies not only provide a variety of communication modes (e.g., texting, cell phone conversation, and online instant messaging), but also detailed electronic traces of these communications between individuals. These electronic traces indicate that the interactions occur in temporal bursts. Here, we study intercall duration of communications of the 100,000 most active cell phone users of a Chinese mobile phone operator. We confirm that the intercall durations follow a power-law distribution with an exponential cutoff at the population level but find differences when focusing on individual users. We apply statistical tests at the individual level and find that the intercall durations follow a power-law distribution for only 3,460 individuals (3.46%). The intercall durations for the majority (73.34%) follow a Weibull distribution. We quantify individual users using three measures: out-degree, percentage of outgoing calls, and communication diversity. We find that the cell phone users with a power-law duration distribution fall into three anomalous clusters: robot-based callers, telecom fraud, and telephone sales. This information is of interest to both academics and practitioners, mobile telecom operators in particular. In contrast, the individual users with a Weibull duration distribution form the fourth cluster of ordinary cell phone users. We also discover more information about the calling patterns of these four clusters (e.g., the probability that a user will call the c_r-th most-contacted person and the probability distribution of burst sizes). Our findings may enable a more detailed analysis of the huge body of data contained in the logs of massive numbers of users. PMID:23319645
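
    A minimal sketch of the per-user model comparison described above (not the paper's exact statistical test): fit a Weibull and a tail power law to one user's intercall durations and compare Kolmogorov-Smirnov statistics; the data below are synthetic.

    ```python
    # Minimal sketch: for one user's intercall durations, fit a Weibull and a (tail)
    # power law and compare goodness of fit with K-S statistics (synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    durations = stats.weibull_min.rvs(0.6, scale=300.0, size=2000, random_state=rng)  # seconds

    # Weibull fit (location fixed at 0, as usual for waiting times).
    c, loc, scale = stats.weibull_min.fit(durations, floc=0)
    ks_weibull = stats.kstest(durations, "weibull_min", args=(c, loc, scale)).statistic

    # Power-law fit above a cutoff xmin via the continuous MLE.
    xmin = np.quantile(durations, 0.5)
    tail = durations[durations >= xmin]
    alpha = 1.0 + tail.size / np.log(tail / xmin).sum()
    ks_power = stats.kstest(tail, lambda x: 1.0 - (x / xmin) ** (1.0 - alpha)).statistic

    print(f"Weibull shape={c:.2f}, KS={ks_weibull:.3f}; power-law alpha={alpha:.2f}, KS={ks_power:.3f}")
    ```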

  4. Calling patterns in human communication dynamics.

    PubMed

    Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H Eugene

    2013-01-29

    Modern technologies not only provide a variety of communication modes (e.g., texting, cell phone conversation, and online instant messaging), but also detailed electronic traces of these communications between individuals. These electronic traces indicate that the interactions occur in temporal bursts. Here, we study intercall duration of communications of the 100,000 most active cell phone users of a Chinese mobile phone operator. We confirm that the intercall durations follow a power-law distribution with an exponential cutoff at the population level but find differences when focusing on individual users. We apply statistical tests at the individual level and find that the intercall durations follow a power-law distribution for only 3,460 individuals (3.46%). The intercall durations for the majority (73.34%) follow a Weibull distribution. We quantify individual users using three measures: out-degree, percentage of outgoing calls, and communication diversity. We find that the cell phone users with a power-law duration distribution fall into three anomalous clusters: robot-based callers, telecom fraud, and telephone sales. This information is of interest to both academics and practitioners, mobile telecom operators in particular. In contrast, the individual users with a Weibull duration distribution form the fourth cluster of ordinary cell phone users. We also discover more information about the calling patterns of these four clusters (e.g., the probability that a user will call the c(r)-th most-contacted person and the probability distribution of burst sizes). Our findings may enable a more detailed analysis of the huge body of data contained in the logs of massive numbers of users.

  5. Modeling intersubject variability of bronchial doses for inhaled radon progeny.

    PubMed

    Hofmann, Werner; Winkler-Heil, Renate; Hussain, Majid

    2010-10-01

    The main sources of intersubject variations considered in the present study were: (1) size and structure of nasal and oral passages, affecting extrathoracic deposition and, in further consequence, the fraction of the inhaled activity reaching the bronchial region; (2) size and asymmetric branching of the human bronchial airway system, leading to variations of diameters, lengths, branching angles, etc.; (3) respiratory parameters, such as tidal volume and breathing frequency; (4) mucociliary clearance rates; and (5) thickness of the bronchial epithelium and depth of target cells, related to airway diameters. For the calculation of deposition fractions, retained surface activities, and bronchial doses, parameter values were randomly selected from their corresponding probability density functions, derived from experimental data, by applying Monte Carlo methods. Bronchial doses, expressed in mGy WLM⁻¹, were computed for specific mining conditions, i.e., for defined size distributions, unattached fractions, and physical activities. Resulting bronchial dose distributions could be approximated by lognormal distributions. Geometric standard deviations illustrating intersubject variations ranged from about 2 in the trachea to about 7 in peripheral bronchiolar airways. The major sources of the intersubject variability of bronchial doses for inhaled radon progeny are the asymmetry and variability of the linear airway dimensions, the filtering efficiency of the nasal passages, and the thickness of the bronchial epithelium, while fluctuations of the respiratory parameters and mucociliary clearance rates seem to compensate each other.
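
    The parameter-sampling approach can be sketched as follows: draw subject-specific parameters from assumed probability density functions, push them through a dose model, and summarize the resulting distribution by its geometric mean and geometric standard deviation. The toy dose model and parameter distributions below are illustrative assumptions only, not those of the study.

    ```python
    # Minimal sketch: Monte Carlo propagation of intersubject parameter variability into
    # a dose distribution, summarized by geometric mean and geometric standard deviation.
    # The dose model and parameter distributions are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000

    # Randomly sampled subject parameters (assumed distributions, not the paper's):
    airway_diam  = rng.lognormal(mean=np.log(4.0), sigma=0.25, size=n)   # mm
    epith_thick  = rng.lognormal(mean=np.log(50.0), sigma=0.30, size=n)  # um
    tidal_volume = rng.normal(loc=0.75, scale=0.10, size=n)              # L

    # Placeholder dose model: dose rises with ventilation, falls with airway size
    # and with the depth of the target cells.
    dose = 6.0 * tidal_volume / (airway_diam * (epith_thick / 50.0))     # arbitrary units

    log_d = np.log(dose)
    gm, gsd = np.exp(log_d.mean()), np.exp(log_d.std())
    print(f"geometric mean = {gm:.2f}, geometric standard deviation = {gsd:.2f}")
    ```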

  6. Simulating statistics of lightning-induced and man made fires

    NASA Astrophysics Data System (ADS)

    Krenn, R.; Hergarten, S.

    2009-04-01

    The frequency-area distributions of forest fires show power-law behavior with scaling exponents α in a quite narrow range, relating wildfire research to the theoretical framework of self-organized criticality. Examples of self-organized critical behavior can be found in computer simulations of simple cellular automata. The established self-organized critical Drossel-Schwabl forest fire model (DS-FFM) is one of the most widespread models in this context. Despite its qualitative agreement with event-size statistics from nature, its applicability is still questioned. Apart from general concerns that the DS-FFM apparently oversimplifies the complex nature of forest dynamics, it significantly overestimates the frequency of large fires. We present a straightforward modification of the model rules that increases the scaling exponent α by approximately 1/3 and brings the simulated event-size statistics close to those observed in nature. In addition, combined simulations of both the original and the modified model predict a dependence of the overall distribution on the ratio of lightning-induced to man-made fires, as well as a difference between their respective event-size statistics. The increase of the scaling exponent with decreasing lightning probability, as well as the splitting of the partial distributions, is confirmed by analysis of the Canadian Large Fire Database. As a consequence, lightning-induced and man-made forest fires cannot be treated separately in wildfire modeling, hazard assessment and forest management.
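
    For reference, a minimal implementation of the original (unmodified) Drossel-Schwabl forest fire model in its usual time-scale-separated form is sketched below; the grid size and the lightning-to-growth ratio are illustrative choices, and the paper's modified rules are not reproduced.

    ```python
    # Minimal sketch of the original DS-FFM (time-scale-separated form): grow one tree on a
    # random empty site per step; with probability f/p, lightning strikes a random site and,
    # if it is a tree, the whole connected cluster burns instantaneously.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(10)
    L, steps, f_over_p = 64, 40000, 0.01
    forest = np.zeros((L, L), dtype=bool)
    fire_sizes = []

    for _ in range(steps):
        empty = np.argwhere(~forest)
        if empty.size:
            i, j = empty[rng.integers(len(empty))]
            forest[i, j] = True                     # tree growth
        if rng.random() < f_over_p:                 # lightning strike
            i, j = rng.integers(L), rng.integers(L)
            if forest[i, j]:
                labels, _ = ndimage.label(forest)   # 4-connected tree clusters
                cluster = labels == labels[i, j]
                fire_sizes.append(int(cluster.sum()))
                forest[cluster] = False             # the whole cluster burns

    fire_sizes = np.array(fire_sizes)
    print(f"{fire_sizes.size} fires, mean size {fire_sizes.mean():.1f}, largest {fire_sizes.max()}")
    ```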

  7. Tree cover at fine and coarse spatial grains interacts with shade tolerance to shape plant species distributions across the Alps

    PubMed Central

    Nieto-Lugilde, Diego; Lenoir, Jonathan; Abdulhak, Sylvain; Aeschimann, David; Dullinger, Stefan; Gégout, Jean-Claude; Guisan, Antoine; Pauli, Harald; Renaud, Julien; Theurillat, Jean-Paul; Thuiller, Wilfried; Van Es, Jérémie; Vittoz, Pascal; Willner, Wolfgang; Wohlgemuth, Thomas; Zimmermann, Niklaus E.; Svenning, Jens-Christian

    2015-01-01

    The role of competition for light among plants has long been recognised at local scales, but its importance for plant species distributions at larger spatial scales has generally been ignored. Tree cover modifies the local abiotic conditions below the canopy, notably by reducing light availability, and thus, also the performance of species that are not adapted to low-light conditions. However, this local effect may propagate to coarser spatial grains, by affecting colonisation probabilities and local extinction risks of herbs and shrubs. To assess the effect of tree cover at both the plot- and landscape-grain sizes (approximately 10-m and 1-km), we fit Generalised Linear Models (GLMs) for the plot-level distributions of 960 species of herbs and shrubs using 6,935 vegetation plots across the European Alps. We ran four models with different combinations of variables (climate, soil and tree cover) at both spatial grains for each species. We used partial regressions to evaluate the independent effects of plot- and landscape-grain tree cover on plot-level plant communities. Finally, the effects on species-specific elevational range limits were assessed by simulating a removal experiment comparing the species distributions under high and low tree cover. Accounting for tree cover improved the model performance, with the probability of the presence of shade-tolerant species increasing with increasing tree cover, whereas shade-intolerant species showed the opposite pattern. The tree cover effect occurred consistently at both the plot and landscape spatial grains, albeit most strongly at the former. Importantly, tree cover at the two grain sizes had partially independent effects on plot-level plant communities. With high tree cover, shade-intolerant species exhibited narrower elevational ranges than with low tree cover whereas shade-tolerant species showed wider elevational ranges at both limits. These findings suggest that forecasts of climate-related range shifts for herb and shrub species may be modified by tree cover dynamics. PMID:26290621

  8. The Structure of the Young Star Cluster NGC 6231. II. Structure, Formation, and Fate

    NASA Astrophysics Data System (ADS)

    Kuhn, Michael A.; Getman, Konstantin V.; Feigelson, Eric D.; Sills, Alison; Gromadzki, Mariusz; Medina, Nicolás; Borissova, Jordanka; Kurtev, Radostin

    2017-12-01

    The young cluster NGC 6231 (stellar ages ˜2-7 Myr) is observed shortly after star formation activity has ceased. Using the catalog of 2148 probable cluster members obtained from Chandra, VVV, and optical surveys (Paper I), we examine the cluster’s spatial structure and dynamical state. The spatial distribution of stars is remarkably well fit by an isothermal sphere with moderate elongation, while other commonly used models like Plummer spheres, multivariate normal distributions, or power-law models are poor fits. The cluster has a core radius of 1.2 ± 0.1 pc and a central density of ˜200 stars pc-3. The distribution of stars is mildly mass segregated. However, there is no radial stratification of the stars by age. Although most of the stars belong to a single cluster, a small subcluster of stars is found superimposed on the main cluster, and there are clumpy non-isotropic distributions of stars outside ˜4 core radii. When the size, mass, and age of NGC 6231 are compared to other young star clusters and subclusters in nearby active star-forming regions, it lies at the high-mass end of the distribution but along the same trend line. This could result from similar formation processes, possibly hierarchical cluster assembly. We argue that NGC 6231 has expanded from its initial size but that it remains gravitationally bound.

  9. Application of wildfire simulation methods to assess wildfire exposure in a Mediterranean fire-prone area (Sardinia, Italy)

    NASA Astrophysics Data System (ADS)

    Salis, M.; Ager, A.; Arca, B.; Finney, M.; Bacciu, V. M.; Spano, D.; Duce, P.

    2012-12-01

    Spatial and temporal patterns of fire spread and behavior are dependent on interactions among climate, topography, vegetation and fire suppression efforts (Pyne et al. 1996; Viegas 2006; Falk et al. 2007). Humans also play a key role in determining frequency and spatial distribution of ignitions (Bar Massada et al., 2011), and thus influence fire regimes as well. The growing incidence of catastrophic wildfires has led to substantial losses for important ecological and human values within many areas of the Mediterranean basin (Moreno et al. 1998; Mouillot et al. 2005; Viegas et al. 2006a; Riaño et al. 2007). The growing fire risk issue has led to many new programs and policies of fuel management and risk mitigation by environmental and fire agencies. However, risk-based methodologies to help identify areas characterized by high potential losses and prioritize fuel management have been lacking for the region. Formal risk assessment requires the joint consideration of likelihood, intensity, and susceptibility, the product of which estimates the chance of a specific loss (Brillinger 2003; Society of Risk Analysis, 2006). Quantifying fire risk therefore requires estimates of a) the probability of a specific location burning at a specific intensity, and b) the resulting change in financial or ecological value (Finney 2005; Scott 2006). When large fires are the primary cause of damage, the application of this risk formulation requires modeling fire spread to capture landscape properties that affect burn probability. Recently, the incorporation of large fire spread into risk assessment systems has become feasible with the development of high performance fire simulation systems (Finney et al. 2011) that permit the simulation of hundreds of thousands of fires to generate fine scale maps of burn probability, flame length, and fire size, while considering the combined effects of weather, fuels, and topography (Finney 2002; Andrews et al. 2007; Ager and Finney 2009; Finney et al. 2009; Salis et al. 2012 accepted). In this work, we employed wildfire simulation methods to quantify wildfire exposure to human and ecological values for the island of Sardinia, Italy. The work was focused on the risk and exposure posed by large fires (e.g. 100 - 10,000 ha), and considered historical weather, ignition patterns and fuels. We simulated 100,000 fires using burn periods that replicated the historical size distribution on the Island, and an ignition probability grid derived from historic ignition data. We then examined spatial variation in three exposure components (burn probability, flame length, fire size) among important human and ecological values. The results allowed us to contrast exposure among and within the various features examined, and highlighted the importance of human factors in shaping wildfire exposure in Sardinia. The work represents the first application of burn probability modeling in the Mediterranean region, and sets the stage for expanded work in the region to quantify risk from large fires.

  10. Stochastic Evolution Dynamic of the Rock-Scissors-Paper Game Based on a Quasi Birth and Death Process

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-06-01

    Stochasticity plays an important role in the evolutionary dynamic of cyclic dominance within a finite population. To investigate the stochastic evolution process of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game stochastic evolutionary dynamic. The results of the numerical experiments are exhibited as pseudo-colour ternary heat maps. Comparison of these diagrams shows that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, depending on the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.

  11. Dynamical origins of the community structure of an online multi-layer society

    NASA Astrophysics Data System (ADS)

    Klimek, Peter; Diakonova, Marina; Eguíluz, Víctor M.; San Miguel, Maxi; Thurner, Stefan

    2016-08-01

    Social structures emerge as a result of individuals managing a variety of different social relationships. Societies can be represented as highly structured dynamic multiplex networks. Here we study the dynamical origins of the specific community structures of a large-scale social multiplex network of a human society that interacts in a virtual world of a massive multiplayer online game. There we find substantial differences in the community structures of different social actions, represented by the various layers in the multiplex network. Community size distributions are either fat-tailed or appear to be centered around a size of 50 individuals. To understand these observations we propose a voter model that is built around the principle of triadic closure. It explicitly models the co-evolution of node- and link-dynamics across different layers of the multiplex network. Depending on link and node fluctuation probabilities, the model exhibits an anomalous shattered fragmentation transition, where one layer fragments from one large component into many small components. The observed community size distributions are in good agreement with the predicted fragmentation in the model. This suggests that several detailed features of the fragmentation in societies can be traced back to the triadic closure processes.

  12. Stochastic Evolution Dynamic of the Rock-Scissors-Paper Game Based on a Quasi Birth and Death Process.

    PubMed

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-06-27

    Stochasticity plays an important role in the evolutionary dynamic of cyclic dominance within a finite population. To investigate the stochastic evolution process of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game stochastic evolutionary dynamic. The results of the numerical experiments are exhibited as pseudo-colour ternary heat maps. Comparison of these diagrams shows that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, depending on the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.

  13. Extinction risk is most acute for the world’s largest and smallest vertebrates

    PubMed Central

    Ripple, William J.; Wolf, Christopher; Newsome, Thomas M.; Hoffmann, Michael; Wirsing, Aaron J.; McCauley, Douglas J.

    2017-01-01

    Extinction risk in vertebrates has been linked to large body size, but this putative relationship has only been explored for select taxa, with variable results. Using a newly assembled and taxonomically expansive database, we analyzed the relationships between extinction risk and body mass (27,647 species) and between extinction risk and range size (21,294 species) for vertebrates across six main classes. We found that the probability of being threatened was positively and significantly related to body mass for birds, cartilaginous fishes, and mammals. Bimodal relationships were evident for amphibians, reptiles, and bony fishes. Most importantly, a bimodal relationship was found across all vertebrates such that extinction risk changes around a body mass breakpoint of 0.035 kg, indicating that the lightest and heaviest vertebrates have elevated extinction risk. We also found range size to be an important predictor of the probability of being threatened, with strong negative relationships across nearly all taxa. A review of the drivers of extinction risk revealed that the heaviest vertebrates are most threatened by direct killing by humans. By contrast, the lightest vertebrates are most threatened by habitat loss and modification stemming especially from pollution, agricultural cropping, and logging. Our results offer insight into halting the ongoing wave of vertebrate extinctions by revealing the vulnerability of large and small taxa, and identifying size-specific threats. Moreover, they indicate that, without intervention, anthropogenic activities will soon precipitate a double truncation of the size distribution of the world’s vertebrates, fundamentally reordering the structure of life on our planet. PMID:28923917

  14. Extinction risk is most acute for the world's largest and smallest vertebrates.

    PubMed

    Ripple, William J; Wolf, Christopher; Newsome, Thomas M; Hoffmann, Michael; Wirsing, Aaron J; McCauley, Douglas J

    2017-10-03

    Extinction risk in vertebrates has been linked to large body size, but this putative relationship has only been explored for select taxa, with variable results. Using a newly assembled and taxonomically expansive database, we analyzed the relationships between extinction risk and body mass (27,647 species) and between extinction risk and range size (21,294 species) for vertebrates across six main classes. We found that the probability of being threatened was positively and significantly related to body mass for birds, cartilaginous fishes, and mammals. Bimodal relationships were evident for amphibians, reptiles, and bony fishes. Most importantly, a bimodal relationship was found across all vertebrates such that extinction risk changes around a body mass breakpoint of 0.035 kg, indicating that the lightest and heaviest vertebrates have elevated extinction risk. We also found range size to be an important predictor of the probability of being threatened, with strong negative relationships across nearly all taxa. A review of the drivers of extinction risk revealed that the heaviest vertebrates are most threatened by direct killing by humans. By contrast, the lightest vertebrates are most threatened by habitat loss and modification stemming especially from pollution, agricultural cropping, and logging. Our results offer insight into halting the ongoing wave of vertebrate extinctions by revealing the vulnerability of large and small taxa, and identifying size-specific threats. Moreover, they indicate that, without intervention, anthropogenic activities will soon precipitate a double truncation of the size distribution of the world's vertebrates, fundamentally reordering the structure of life on our planet.

  15. Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2010-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
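
    One way to summarize such data is to fit a parametric curve for the probability of a short as a function of voltage; the logistic form and the synthetic outcomes below are assumptions for illustration, not the paper's fitted model.

    ```python
    # Minimal sketch: estimate the probability of a whisker-induced short as a function of
    # voltage by fitting a logistic curve to binary shorting outcomes (synthetic data;
    # the logistic form is an assumption, not necessarily the paper's model).
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(v, v50, k):
        return 1.0 / (1.0 + np.exp(-k * (v - v50)))

    rng = np.random.default_rng(5)
    voltages = np.repeat(np.array([5.0, 10.0, 15.0, 20.0, 25.0]), 40)   # test voltages (V)
    p_true = logistic(voltages, 12.0, 0.5)
    shorted = rng.uniform(size=voltages.size) < p_true                  # True = short occurred

    (v50, k), _ = curve_fit(logistic, voltages, shorted.astype(float), p0=[10.0, 0.3])
    print(f"estimated V50 = {v50:.1f} V, slope k = {k:.2f}")
    print("P(short) at 9 V ≈", round(logistic(9.0, v50, k), 3))
    ```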

  16. Developing an Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2009-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note however that due to contact resistance electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.

  17. The effect of wind and eruption source parameter variations on tephra fallout hazard assessment: an example from Vesuvio (Italy)

    NASA Astrophysics Data System (ADS)

    Macedonio, Giovanni; Costa, Antonio; Scollo, Simona; Neri, Augusto

    2015-04-01

    Uncertainty in the tephra fallout hazard assessment may depend on different meteorological datasets and eruptive source parameters used in the modelling. We present a statistical study to analyze this uncertainty in the case of a sub-Plinian eruption of Vesuvius of VEI = 4, column height of 18 km and total erupted mass of 5 × 10¹¹ kg. The hazard assessment for tephra fallout is performed using the advection-diffusion model Hazmap. First, we statistically analyze different meteorological datasets: i) from the daily atmospheric soundings of the stations located in Brindisi (Italy) between 1962 and 1976 and between 1996 and 2012, and in Pratica di Mare (Rome, Italy) between 1996 and 2012; ii) from numerical weather prediction models of the National Oceanic and Atmospheric Administration and of the European Centre for Medium-Range Weather Forecasts. Furthermore, we modify the total mass, the total grain-size distribution, the eruption column height, and the diffusion coefficient. Then, we quantify the impact that different datasets and model input parameters have on the probability maps. Results show that the parameter that most affects the tephra fallout probability maps, keeping the total mass constant, is the particle terminal settling velocity, which is a function of the total grain-size distribution, particle density and shape. In contrast, the hazard assessment depends only weakly on the use of different meteorological datasets, column height and diffusion coefficient.

  18. A rapid local singularity analysis algorithm with applications

    NASA Astrophysics Data System (ADS)

    Chen, Zhijun; Cheng, Qiuming; Agterberg, Frits

    2015-04-01

    The local singularity model developed by Cheng is fast gaining popularity for characterizing mineralization and detecting anomalies in geochemical, geophysical and remote sensing data. However, one of the conventional algorithms, which computes moving-average values at multiple scales, is time-consuming, especially when analyzing large datasets. The summed area table (SAT), also called the integral image, is a fast algorithm used within the Viola-Jones object detection framework in computer vision. Historically, the principle of the SAT is well known from the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (the area under the probability distribution) from the respective cumulative distribution functions. In this study we introduce the SAT and its variant, the rotated summed area table, into isotropic, anisotropic and directional local singularity mapping. Once the SAT has been computed, any rectangular sum can be evaluated at any scale or location in constant time: the sum over any rectangular region of the image requires only 4 array accesses, independently of the size of the region, effectively reducing the time complexity from O(n) to O(1). New programs in Python, Julia, Matlab and C++ were implemented to serve different applications, especially big-data analysis. Several large geochemical and remote sensing datasets were tested. A wide variety of scale progressions (linear or logarithmic spacing), for both non-iterative and iterative approaches, were adopted to calculate the singularity index values and compare the results. The results indicate that local singularity analysis with the SAT is more robust than, and superior to, the traditional approach in identifying anomalies.
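
    A minimal sketch of the SAT construction and the constant-time rectangular sum, the building block for fast multi-scale moving averages, in NumPy:

    ```python
    # Minimal sketch: summed area table (integral image) and O(1) rectangular-window sums.
    import numpy as np

    def summed_area_table(a):
        """SAT with a zero first row/column so that window sums need no edge cases."""
        s = np.zeros((a.shape[0] + 1, a.shape[1] + 1), dtype=float)
        s[1:, 1:] = a.cumsum(axis=0).cumsum(axis=1)
        return s

    def window_sum(sat, r0, c0, r1, c1):
        """Sum of a[r0:r1, c0:c1] from 4 array accesses, independent of window size."""
        return sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]

    rng = np.random.default_rng(6)
    grid = rng.random((1000, 1000))
    sat = summed_area_table(grid)
    # Mean over an arbitrary 250x125 window, checked against the direct computation.
    print(window_sum(sat, 100, 200, 350, 325) / (250 * 125),
          grid[100:350, 200:325].mean())
    ```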

  19. Range size heritability and diversification patterns in the liverwort genus Radula.

    PubMed

    Patiño, Jairo; Wang, Jian; Renner, Matt A M; Gradstein, S Robbert; Laenen, Benjamin; Devos, Nicolas; Shaw, A Jonathan; Vanderpoorten, Alain

    2017-01-01

    Why some species exhibit larger geographical ranges than others, and to what extent does variation in range size affect diversification rates, remains a fundamental, but largely unanswered question in ecology and evolution. Here, we implement phylogenetic comparative analyses and ancestral area estimations in Radula, a liverwort genus of Cretaceous origin, to investigate the mechanisms that explain differences in geographical range size and diversification rates among lineages. Range size was phylogenetically constrained in the two sub-genera characterized by their almost complete Australasian and Neotropical endemicity, respectively. The congruence between the divergence time of these lineages and continental split suggests that plate tectonics could have played a major role in their present distribution, suggesting that a strong imprint of vicariance can still be found in extant distribution patterns in these highly mobile organisms. Amentuloradula, Volutoradula and Metaradula species did not appear to exhibit losses of dispersal capacities in terms of dispersal life-history traits, but evidence for significant phylogenetic signal in macroecological niche traits suggests that niche conservatism accounts for their restricted geographic ranges. Despite their greatly restricted distribution to Australasia and Neotropics respectively, Amentuloradula and Volutoradula did not exhibit significantly lower diversification rates than more widespread lineages, in contrast with the hypothesis that the probability of speciation increases with range size by promoting geographic isolation and increasing the rate at which novel habitats are encountered. We suggest that stochastic long-distance dispersal events may balance allele frequencies across large spatial scales, leading to low genetic structure among geographically distant areas or even continents, ultimately decreasing the diversification rates in highly mobile, widespread lineages. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Density, aggregation, and body size of northern pikeminnow preying on juvenile salmonids in a large river

    USGS Publications Warehouse

    Petersen, J.H.

    2001-01-01

    Predation by northern pikeminnow Ptychocheilus oregonensis on juvenile salmonids Oncorhynchus spp. probably occurred during brief feeding bouts, since diets were either dominated by salmonids (>80% by weight) or contained other prey types and few salmonids (<5%). In samples where salmonids had been consumed, large rather than small predators were more likely to have captured salmonids. Transects with higher catch per unit effort of predators also had higher incidences of salmonids in predator guts. Predators in two of three reservoir areas were distributed more contagiously if they had preyed recently on salmonids. Spatial and temporal patchiness of salmonid prey may be generating differences in local density, aggregation, and body size of their predators in this large river.

  1. Combinatoric analysis of heterogeneous stochastic self-assembly.

    PubMed

    D'Orsogna, Maria R; Zhao, Bingyu; Berenji, Bijan; Chou, Tom

    2013-09-28

    We analyze a fully stochastic model of heterogeneous nucleation and self-assembly in a closed system with a fixed total particle number M, and a fixed number of seeds Ns. Each seed can bind a maximum of N particles. A discrete master equation for the probability distribution of the cluster sizes is derived and the corresponding cluster concentrations are found using kinetic Monte-Carlo simulations in terms of the density of seeds, the total mass, and the maximum cluster size. In the limit of slow detachment, we also find new analytic expressions and recursion relations for the cluster densities at intermediate times and at equilibrium. Our analytic and numerical findings are compared with those obtained from classical mass-action equations and the discrepancies between the two approaches analyzed.
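
    The stochastic dynamics can be sketched with a small Gillespie-style kinetic Monte Carlo simulation of seeds competing for a fixed monomer pool; the attachment and detachment rates used here are illustrative assumptions, not fitted values.

    ```python
    # Minimal Gillespie-style sketch of heterogeneous self-assembly: Ns seeds, each binding
    # at most N particles, competing for a fixed pool of M free monomers. Rates p (attach)
    # and q (detach) are illustrative assumptions.
    import numpy as np

    def simulate(M=200, Ns=10, N=30, p=1.0, q=0.05, t_end=50.0, seed=7):
        rng = np.random.default_rng(seed)
        k = np.zeros(Ns, dtype=int)              # particles bound to each seed
        t = 0.0
        while t < t_end:
            free = M - k.sum()
            attach = p * free * (k < N)          # per-seed attachment propensities
            detach = q * (k > 0)                 # per-seed detachment propensities
            rates = np.concatenate([attach, detach])
            total = rates.sum()
            if total == 0:
                break
            t += rng.exponential(1.0 / total)    # waiting time to next event
            event = rng.choice(rates.size, p=rates / total)
            if event < Ns:
                k[event] += 1
            else:
                k[event - Ns] -= 1
        return k

    clusters = simulate()
    print("cluster sizes:", np.sort(clusters), " free monomers:", 200 - clusters.sum())
    ```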

  2. Nonadditive entropies yield probability distributions with biases not warranted by the data.

    PubMed

    Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A

    2013-11-01

    Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.

  3. Parabolic features and the erosion rate on Venus

    NASA Technical Reports Server (NTRS)

    Strom, Robert G.

    1993-01-01

    The impact cratering record on Venus consists of 919 craters covering 98 percent of the surface. These craters are remarkably well preserved, and most show pristine structures including fresh ejecta blankets. Only 35 craters (3.8 percent) have had their ejecta blankets embayed by lava, and most of these occur in the Atla-Beta Regio region, an area thought to be recently active. Parabolic features are associated with 66 of the 919 craters. These craters range in size from 6 to 105 km in diameter. The parabolic features are thought to be the result of the deposition of fine-grained ejecta by winds in the dense venusian atmosphere. The deposits cover about 9 percent of the surface and none appear to be embayed by younger volcanic materials. However, there appears to be a paucity of these deposits in the Atla-Beta Regio region, and this may be due to the more recent volcanism in this area of Venus. Since parabolic features are probably fine-grained, wind-deposited ejecta, then all impact craters on Venus probably had these deposits at some time in the past. The older deposits have probably been either eroded or buried by eolian processes. Therefore, the present population of these features is probably associated with the most recent impact craters on the planet. Furthermore, the size/frequency distribution of craters with parabolic features is virtually identical to that of the total crater population. This suggests that there has been little loss of small parabolic features compared to large ones, otherwise there should be a significant and systematic paucity of craters with parabolic features with decreasing size compared to the total crater population. Whatever is erasing the parabolic features apparently does so uniformly regardless of the areal extent of the deposit. The lifetime of parabolic features and the eolian erosion rate on Venus can be estimated from the average age of the surface and the present population of parabolic features.

  4. Habitat Heterogeneity Variably Influences Habitat Selection by Wild Herbivores in a Semi-Arid Tropical Savanna Ecosystem

    PubMed Central

    Muposhi, Victor K.; Gandiwa, Edson; Chemura, Abel; Bartels, Paul; Makuza, Stanley M.; Madiri, Tinaapi H.

    2016-01-01

    An understanding of the habitat selection patterns by wild herbivores is critical for adaptive management, particularly towards ecosystem management and wildlife conservation in semi-arid savanna ecosystems. We tested the following predictions: (i) surface water availability, habitat quality and human presence have a strong influence on the spatial distribution of wild herbivores in the dry season, (ii) habitat suitability for large herbivores would be higher compared to medium-sized herbivores in the dry season, and (iii) spatial extent of suitable habitats for wild herbivores will be different between years, i.e., 2006 and 2010, in Matetsi Safari Area, Zimbabwe. MaxEnt modeling was done to determine the habitat suitability of large herbivores and medium-sized herbivores. MaxEnt modeling of habitat suitability for large herbivores using the environmental variables was successful for the selected species in 2006 and 2010, except for elephant (Loxodonta africana) for the year 2010. Overall, large herbivores' probability of occurrence was mostly influenced by distance from rivers. Distance from roads influenced much of the variability in the probability of occurrence of medium-sized herbivores. The overall predicted area for large and medium-sized herbivores was not different. Large herbivores may not necessarily utilize larger habitat patches over medium-sized herbivores due to the habitat homogenizing effect of water provisioning. The effect of surface water availability, proximity to riverine ecosystems and roads on habitat suitability of large and medium-sized herbivores in the dry season was highly variable and thus could change from one year to another. We recommend adaptive management initiatives aimed at ensuring dynamic water supply in protected areas through temporal closure and/or opening of water points to promote heterogeneity of wildlife habitats. PMID:27680673

  5. Individual analyses of Lévy walk in semi-free ranging Tonkean macaques (Macaca tonkeana).

    PubMed

    Sueur, Cédric; Briard, Léa; Petit, Odile

    2011-01-01

    Animals adapt their movement patterns to their environment in order to maximize their efficiency when searching for food. The Lévy walk and the Brownian walk are two types of random movement found in different species. Studies have shown that these random movements can switch from a Brownian to a Lévy walk according to the size distribution of food patches. However no study to date has analysed how characteristics such as sex, age, dominance or body mass affect the movement patterns of an individual. In this study we used the maximum likelihood method to examine the nature of the distribution of step lengths and waiting times and assessed how these distributions are influenced by the age and the sex of group members in a semi free-ranging group of ten Tonkean macaques. Individuals highly differed in their activity budget and in their movement patterns. We found an effect of age and sex of individuals on the power distribution of their step lengths and of their waiting times. The males and old individuals displayed a higher proportion of longer trajectories than females and young ones. As regards waiting times, females and old individuals displayed higher rates of long stationary periods than males and young individuals. These movement patterns resembling random walks can probably be explained by the animals moving from one location to other known locations. The power distribution of step lengths might be due to a power distribution of food patches in the enclosure while the power distribution of waiting times might be due to the power distribution of the patch sizes.

  6. Reliability of Different Mark-Recapture Methods for Population Size Estimation Tested against Reference Population Sizes Constructed from Field Data

    PubMed Central

    Grimm, Annegret; Gruber, Bernd; Henle, Klaus

    2014-01-01

    Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to evaluate the performance of mark-recapture population size estimators under field conditions, which is essential for selecting an appropriate method and obtaining reliable results in ecology and conservation biology, and thus for sound management. PMID:24896260
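
    The simplest member of the family of estimators assessed above is the two-occasion Lincoln-Petersen estimator; a sketch of its Chapman bias-corrected form follows, with illustrative counts rather than the gecko data.

    ```python
    # Minimal sketch: Chapman's bias-corrected Lincoln-Petersen estimator of population size
    # from a two-occasion mark-recapture study (illustrative counts).
    def chapman_estimate(n1, n2, m2):
        """n1 marked on occasion 1, n2 caught on occasion 2, m2 of them already marked."""
        n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
        var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
        return n_hat, var ** 0.5

    n_hat, se = chapman_estimate(n1=45, n2=50, m2=18)
    print(f"estimated population size: {n_hat:.1f} ± {se:.1f} (1 s.e.)")
    ```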

  7. ProbOnto: ontology and knowledge base of probability distributions.

    PubMed

    Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala

    2016-09-01

    Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability: http://probonto.org. Contact: mjswat@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  8. PRODIGEN: visualizing the probability landscape of stochastic gene regulatory networks in state and time space.

    PubMed

    Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta

    2017-02-15

    Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.

  9. Comparing fuel reduction treatments for reducing wildfire size and intensity in a boreal forest landscape of northeastern China.

    PubMed

    Wu, Zhiwei; He, Hong S; Liu, Zhihua; Liang, Yu

    2013-06-01

    Fuel load is often used to prioritize stands for fuel reduction treatments. However, wildfire size and intensity are not only related to fuel loads but also to a wide range of other spatially related factors such as topography, weather and human activity. In prioritizing fuel reduction treatments, we propose using burn probability to account for the effects of spatially related factors that can affect wildfire size and intensity. Our burn probability incorporated fuel load, ignition probability, and spread probability (spatial controls to wildfire) at a particular location across a landscape. Our goal was to assess differences in reducing wildfire size and intensity using fuel-load and burn-probability based treatment prioritization approaches. Our study was conducted in a boreal forest in northeastern China. We derived a fuel load map from a stand map and a burn probability map based on historical fire records and potential wildfire spread pattern. The burn probability map was validated using historical records of burned patches. We then simulated 100 ignitions and six fuel reduction treatments to compare fire size and intensity under two approaches of fuel treatment prioritization. We calibrated and validated simulated wildfires against historical wildfire data. Our results showed that fuel reduction treatments based on burn probability were more effective at reducing simulated wildfire size, mean and maximum rate of spread, and mean fire intensity, but less effective at reducing maximum fire intensity across the burned landscape than treatments based on fuel load. Thus, contributions from both fuels and spatially related factors should be considered for each fuel reduction treatment. Published by Elsevier B.V.

  10. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    In this study we examined and compared three different probability distribution methods to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 for magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distribution methods, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probabilities and the conditional probabilities of earthquake occurrence for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was the most suitable of the distribution methods in this region.
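
    A sketch of this kind of comparison using SciPy (synthetic inter-event times, not the North Anatolian catalogue): fit the Weibull, Frechet and three-parameter Weibull distributions and rank them by the K-S statistic.

    ```python
    # Minimal sketch: fit Weibull, Frechet and three-parameter Weibull distributions to
    # inter-event times and rank them with the K-S statistic (synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    interevent_years = stats.weibull_min.rvs(1.3, scale=8.0, size=60, random_state=rng)

    candidates = {
        "Weibull (2-par)": (stats.weibull_min, dict(floc=0)),
        "Frechet":         (stats.invweibull, dict(floc=0)),
        "Weibull (3-par)": (stats.weibull_min, {}),
    }
    for name, (dist, kwargs) in candidates.items():
        params = dist.fit(interevent_years, **kwargs)
        ks = stats.kstest(interevent_years, dist.name, args=params).statistic
        print(f"{name:16s} params={tuple(round(p, 2) for p in params)}  KS={ks:.3f}")
    ```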

  11. Probabilistic Sizing and Verification of Space Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit

    2012-07-01

    Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
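
    The probabilistic sizing idea can be illustrated with the standard two-parameter Weibull weakest-link expression for the failure probability of a uniformly stressed ceramic volume; the modulus, characteristic strength and volumes below are illustrative values, not those of any of the flight hardware mentioned.

    ```python
    # Minimal sketch: two-parameter Weibull weakest-link failure probability used in
    # probabilistic sizing of ceramic parts. Values of m, sigma0 and V are illustrative.
    import numpy as np

    def weibull_pof(sigma, m, sigma0, V, V0=1.0):
        """P_f = 1 - exp(-(V/V0) * (sigma/sigma0)**m) for a uniformly stressed volume V."""
        return 1.0 - np.exp(-(V / V0) * (sigma / sigma0) ** m)

    m, sigma0 = 10.0, 300.0            # Weibull modulus and characteristic strength (MPa)
    for stress in (100.0, 150.0, 200.0):
        print(f"P_f at {stress:5.1f} MPa:", f"{weibull_pof(stress, m, sigma0, V=5.0):.2e}")
    ```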

  12. Extinction risk and eco-evolutionary dynamics in a variable environment with increasing frequency of extreme events

    PubMed Central

    Vincenzi, Simone

    2014-01-01

    One of the most dramatic consequences of climate change will be the intensification and increased frequency of extreme events. I used numerical simulations to understand and predict the consequences of directional trend (i.e. mean state) and increased variability of a climate variable (e.g. temperature), increased probability of occurrence of point extreme events (e.g. floods), selection pressure and effect size of mutations on a quantitative trait determining individual fitness, as well as their effects on the population and genetic dynamics of a population of moderate size. The interaction among climate trend, variability and probability of point extremes had a minor effect on risk of extinction, time to extinction and distribution of the trait after accounting for their independent effects. The survival chances of a population strongly and linearly decreased with increasing strength of selection, as well as with increasing climate trend and variability. Mutation amplitude had no effects on extinction risk, time to extinction or genetic adaptation to the new climate. Climate trend and strength of selection largely determined the shift of the mean phenotype in the population. The extinction or persistence of the populations in an ‘extinction window’ of 10 years was well predicted by a simple model including mean population size and mean genetic variance over a 10-year time frame preceding the ‘extinction window’, although genetic variance had a smaller role than population size in predicting contemporary risk of extinction. PMID:24920116

  13. A Cellular Automata Model for the Study of Landslides

    NASA Astrophysics Data System (ADS)

    Liucci, Luisa; Suteanu, Cristian; Melelli, Laura

    2016-04-01

    Power-law scaling has been observed in the frequency distribution of landslide sizes in many regions of the world, for landslides triggered by different factors, and in both multi-temporal and post-event datasets, thus indicating the universal character of this property of landslides and suggesting that the same mechanisms drive the dynamics of mass wasting processes. The reasons for the scaling behavior of landslide sizes are widely debated, since their understanding would improve our knowledge of the spatial and temporal evolution of this phenomenon. Self-Organized Critical (SOC) dynamics and the key role of topography have been suggested as possible explanations. The scaling exponent of the landslide size-frequency distribution defines the probability of landslide magnitudes and it thus represents an important parameter for hazard assessment. Therefore, another - still unanswered - important question concerns the factors on which its value depends. This paper investigates these issues using a Cellular Automata (CA) model. The CA uses a real topographic surface acquired from a Digital Elevation Model to represent the initial state of the system, where the states of cells are defined in terms of altitude. The stability criterion is based on the slope gradient. The system is driven to instability through a temporal decrease of the stability condition of cells, which may be thought of as representing the temporal weakening of soil caused by factors like rainfall. A transition rule defines the way in which instabilities lead to discharge from unstable cells to the neighboring cells, deciding upon the landslide direction and the quantity of mass involved. Both the direction and the transferred mass depend on the local topographic features. The scaling properties of the area-frequency distributions of the resulting landslide series are investigated for several rates of weakening and for different time windows, in order to explore the response of the system to model parameters, and its temporal behavior. Results show that the model reproduces the scaling behavior of real landslide areas; while the value of the scaling exponent is stable over time, it linearly decreases with increasing rate of weakening. This suggests that it is the intensity of the triggering mechanism rather than its duration that affects the probability of landslide magnitudes. A quantitative relationship between the scaling exponent of the area frequency distribution of the generated landslides, on one hand, and the changes regarding the topographic surface affected by landslides, on the other hand, is established. The fact that a similar behavior could be observed in real systems may have useful implications in the context of landslide hazard assessment. These results support the hypotheses that landslides are driven by SOC dynamics, and that topography plays a key role in the scaling properties of their size distribution.

  14. Provenance of cryoconite deposited on the glaciers of the Tibetan Plateau: New insights from Nd-Sr isotopic composition and size distribution

    NASA Astrophysics Data System (ADS)

    Dong, Zhiwen; Kang, Shichang; Qin, Dahe; Li, Yang; Wang, Xuejia; Ren, Jiawen; Li, Xiaofei; Yang, Jiao; Qin, Xiang

    2016-06-01

    This study presents the Nd-Sr isotopic compositions and size distributions of cryoconite deposited on the glaciers at different locations on the Tibetan Plateau, in order to trace its source areas and the provenance of long-range transported (LRT) Asian dust on the Tibetan Plateau. The result of scanning electron microscope-energy dispersive X-ray spectrometer analysis indicated that mineral dust particles were dominant in the cryoconite. Most of the cryoconite samples from the Tibetan Plateau indicated different Sr and Nd isotopic composition compared with sand from large deserts (e.g., the Taklimakan and Qaidam deserts). Some cryoconite samples showed very similar Nd-Sr isotopic ratios compared with those of nearby glacier basins (e.g., at Laohugou Glacier No.12, Dongkemadi Glacier, and Shiyi Glacier), indicating the potential input of local crustal dust to cryoconite. The volume-size distribution for the cryoconite particles also indicated bimodal distribution graphs with volume median diameters ranging from 0.57 to 20 µm and from 20 to 100 µm, demonstrating the contribution of both LRT Asian dust and local dust inputs to cryoconite. Based on the particle size distribution, we calculated a mean number ratio of local dust contribution to cryoconite ranging from 0.7% (Baishui Glacier No.1) to 17.6% (Shiyi Glacier) on the Tibetan Plateau. In general, the marked difference in the Nd-Sr isotopic ratios of cryoconite compared with those of large deserts probably indicates that materials from the western deserts have not been easily transported to the hinterland of the Tibetan Plateau by the Westerlies under the current climatic conditions, and the arid deserts on the Tibetan Plateau are the most likely sources for cryoconite deposition. The resistance of the Tibetan Plateau to the Westerlies may have caused such phenomena, especially for LRT eolian dust transported over the Tibetan Plateau. Thus, this work is of great importance in understanding the large-scale eolian dust transport and climate over the Tibetan Plateau.
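    One reproducible way to split a bimodal particle-size spectrum of this kind into fine (LRT) and coarse (local) contributions is to fit a two-component lognormal mixture and read the fractions from the mixture weights. The synthetic diameters, mode positions, and use of scipy below are illustrative assumptions, not the authors' procedure.

      import numpy as np
      from scipy import optimize, stats

      rng = np.random.default_rng(3)
      # synthetic particle diameters (micrometres): a fine LRT-like mode plus a coarse local-dust mode
      d = np.concatenate([rng.lognormal(np.log(5.0), 0.6, 900),
                          rng.lognormal(np.log(45.0), 0.4, 100)])

      def neg_log_lik(p):
          w, m1, s1, m2, s2 = p
          pdf = (w * stats.lognorm.pdf(d, s1, scale=np.exp(m1)) +
                 (1.0 - w) * stats.lognorm.pdf(d, s2, scale=np.exp(m2)))
          return -np.sum(np.log(pdf + 1e-300))

      res = optimize.minimize(neg_log_lik, x0=[0.8, np.log(5.0), 0.5, np.log(40.0), 0.5],
                              bounds=[(0.01, 0.99), (0.0, 4.0), (0.05, 2.0), (2.0, 6.0), (0.05, 2.0)],
                              method="L-BFGS-B")
      w, m1, s1, m2, s2 = res.x
      coarse_fraction = (1.0 - w) if m2 > m1 else w
      print(f"fitted modes near {np.exp(m1):.1f} and {np.exp(m2):.1f} micrometres")
      print(f"estimated coarse (local dust) number fraction: {coarse_fraction:.1%}")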

  15. Provenance of cryoconite deposited on the glaciers of the Tibetan Plateau: new insights from Nd-Sr isotopic composition and size distribution

    NASA Astrophysics Data System (ADS)

    Dong, Z.

    2016-12-01

    This study presents the Nd-Sr isotopic compositions and size distributions of cryoconite deposited on the glaciers at different locations on the Tibetan Plateau, in order to trace its source areas and the provenance of long-range transported (LRT) Asian dust on the Tibetan Plateau. The result of SEM-EDS analysis indicated that mineral dust particles were dominant in the cryoconite. Most of the cryoconite samples from the Tibetan Plateau indicated different Sr and Nd isotopic composition compared with sand from large deserts (e.g., the Taklimakan and Qaidam deserts). Some cryoconite samples showed very similar Nd-Sr isotopic ratios compared with those of nearby glacier basins (e.g., at Laohugou Glacier No.12, Dongkemadi Glacier and Shiyi Glacier), indicating the potential input of local crustal dust to cryoconite. The volume-size distribution for the cryoconite particles also indicated bi-modal distribution graphs with volume median diameters ranging from 0.57 to 20 μm and from 20 to 100 μm, demonstrating the contribution of both LRT Asian dust and local dust inputs to cryoconite. Based on the particle size distribution, we calculated a mean number ratio of local dust contribution to cryoconite ranging from 0.7% (Baishui Glacier No.1) to 17.6% (Shiyi Glacier) on the Tibetan Plateau. In general, the marked difference in the Nd-Sr isotopic ratios of cryoconite compared with those of large deserts probably indicates that materials from the western deserts have not been easily transported to the hinterland of the Tibetan Plateau by the Westerlies under the current climatic conditions, and the arid deserts on the Tibetan Plateau are the most likely sources for cryoconite deposition. The resistance of the Tibetan Plateau to the Westerlies may have caused such phenomena, especially for LRT eolian dust transported over the Tibetan Plateau. Thus, this work is of great importance in understanding the large-scale eolian dust transport and climate over the Tibetan Plateau.

  16. Spatio-temporal aerosol particle distributions in the UT/LMS measured by the IAGOS-CARIBIC Observatory

    NASA Astrophysics Data System (ADS)

    Assmann, D. N.; Hermann, M.; Weigelt, A.; Martinsson, B. G.; Brenninkmeijer, C. A. M.; Rauthe-Schoech, A.; van Velthoven, P. J. F.; Boenisch, H.; Zahn, A.

    2016-12-01

    Submicrometer aerosol particles in the upper troposphere and lowermost stratosphere (UT/LMS) influence the Earth's radiation budget directly and, more importantly, indirectly, by acting as cloud condensation nuclei and by changing trace gas concentrations through heterogeneous chemical processes. Since 1997, regular in situ UT/LMS aerosol particle measurements have been conducted by the Leibniz Institute for Tropospheric Research, Leipzig, Germany, and the University of Lund, Sweden, using the CARIBIC (now IAGOS-CARIBIC) observatory (www.caribic-atmospheric.com) onboard a passenger aircraft. Submicrometer aerosol particle number concentrations and the aerosol particle size distribution are measured using three condensation particle counters and one optical particle size spectrometer. Moreover, particle elemental composition is determined using an aerosol impactor sampler and post-flight ion beam analysis (PIXE, PESA) of the samples in the laboratory. Based on this unique data set, including meteorological analysis, we present representative spatio-temporal distributions of particle number, surface, volume, and elemental concentrations at altitudes of 8-12 km covering a large fraction of the northern hemisphere. We discuss the measured values in the different size regimes with respect to sources and sinks in different regions. Additionally, we calculated highly resolved latitudinal and longitudinal cross sections of the particle number size distribution, probability density functions and trends in particle number concentrations, but also in elemental composition, determined from our regular measurements over more than a decade. Moreover, we present the seasonality of particle number concentration in an equivalent latitude-potential temperature coordinate framework (see figure). The results are interpreted with respect to aerosol microphysics and transport using CARIBIC trace gas data like ozone and water vapour. The influence of clouds in the troposphere and the different stratosphere-troposphere-exchange processes are clearly visible. Besides providing information about UT/LMS aerosol particle sources, transport, and sinks, these distributions can be used to validate remote sensing instruments or global atmospheric aerosol models.

  17. Spatio-temporal aerosol particle distributions in the UT/LMS measured by the IAGOS-CARIBIC Observatory

    NASA Astrophysics Data System (ADS)

    Assmann, Denise; Hermann, Markus; Weigelt, Andreas; Martinsson, Bengt; Brenninkmeijer, Carl; Rauthe-Schöch, Armin; van Velthoven, Peter; Bönisch, Harald; Zahn, Andreas

    2017-04-01

    Submicrometer aerosol particles in the upper troposphere and lowermost stratosphere (UT/LMS) influence the Earth's radiation budget directly and, more importantly, indirectly, by acting as cloud condensation nuclei and by changing trace gas concentrations through heterogeneous chemical processes. Since 1997, regular in situ UT/LMS aerosol particle measurements have been conducted by the Leibniz Institute for Tropospheric Research, Leipzig, Germany, and the University of Lund, Sweden, using the CARIBIC (now IAGOS-CARIBIC) observatory (www.caribic-atmospheric.com) onboard a passenger aircraft. Submicrometer aerosol particle number concentrations and the aerosol particle size distribution are measured using three condensation particle counters and one optical particle size spectrometer. Moreover, particle elemental composition is determined using an aerosol impactor sampler and post-flight ion beam analysis (PIXE, PESA) of the samples in the laboratory. Based on this unique data set, including meteorological analysis, we present representative spatio-temporal distributions of particle number, surface, volume and elemental concentrations at altitudes of 8-12 km covering a large fraction of the northern hemisphere. We discuss the measured values in the different size regimes with respect to sources and sinks in different regions. Additionally, we calculated highly resolved latitudinal and longitudinal cross sections of the particle number size distribution, probability density functions and trends in particle number concentrations, but also in elemental composition, determined from our regular measurements over more than a decade. Moreover, we generated seasonal contour plots for particle number concentrations, the potential temperature, and the equivalent latitude. The results are interpreted with respect to aerosol microphysics and transport using CARIBIC trace gas data like ozone and water vapour. The influence of clouds in the troposphere and the different stratosphere-troposphere-exchange processes is clearly visible. Besides providing information about UT/LMS aerosol particle sources, transport, and sinks, these distributions can be used to validate remote sensing instruments or global atmospheric aerosol models.

  18. Size distribution of dust grains: A problem of self-similarity

    NASA Technical Reports Server (NTRS)

    Henning, TH.; Dorschner, J.; Guertler, J.

    1989-01-01

    Distribution functions describing the results of natural processes frequently show the shape of power laws, e.g., the mass functions of stars and molecular clouds, the velocity spectrum of turbulence, and the size distributions of asteroids, micrometeorites, and also interstellar dust grains. It is an open question whether this behavior arises simply from the chosen mathematical representation of the observational data or reflects a deep-seated principle of nature. The authors suppose the latter to be the case. Using a dust model consisting of silicate and graphite grains, Mathis et al. (1977) showed that the interstellar extinction curve can be represented by taking a grain radii distribution of power-law type n(a) ∝ a^(-p) with 3.3 ≤ p ≤ 3.6 (example 1) as a basis. A different approach to understanding power laws like that in example 1 becomes possible through the theory of self-similar processes (scale invariance). The beta model of turbulence (Frisch et al., 1978) leads in an elementary way to the concept of the self-similarity dimension D, a special case of Mandelbrot's (1977) fractal dimension. In the framework of this beta model, it is supposed that at each stage of a cascade the system decays into N clumps and that only the portion beta*N remains active further on. An important feature of this model is that the active eddies become less and less space-filling. In the following, the authors assume that grain-grain collisions are such a scale-invariant process and that the remaining grains are the inactive (frozen) clumps of the cascade. In this way, a size distribution n(a) da ∝ a^(-(D+1)) da (example 2) results. It seems highly probable that the power-law character of the size distribution of interstellar dust grains is the result of a self-similarity process. We cannot, however, exclude that the process leading to the interstellar grain size distribution is not fragmentation at all. It could be, e.g., diffusion-limited growth as discussed by Sander (1986), who applied the theory of fractal geometry to the classification of non-equilibrium growth processes. He obtained D = 2.4 for diffusion-limited aggregation in 3D space.
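    The power law in example 1 is easy to work with numerically. The sketch below (assuming an MRN-like exponent p = 3.5 and illustrative size limits) draws grain radii by inverse-transform sampling and recovers the exponent from a log-log histogram.

      import numpy as np

      rng = np.random.default_rng(0)

      def sample_power_law(p, a_min, a_max, size):
          """Inverse-transform sampling from n(a) proportional to a**(-p) on [a_min, a_max], p != 1."""
          u = rng.random(size)
          k = 1.0 - p
          return (a_min**k + u * (a_max**k - a_min**k)) ** (1.0 / k)

      # MRN-like example: p = 3.5, grain radii between 0.005 and 0.25 micrometres
      a = sample_power_law(3.5, 0.005, 0.25, 100_000)

      # recover the exponent from a histogram with logarithmic bins
      edges = np.logspace(np.log10(0.005), np.log10(0.25), 30)
      counts, _ = np.histogram(a, bins=edges)
      centres = np.sqrt(edges[:-1] * edges[1:])
      density = counts / np.diff(edges)            # number per unit radius
      keep = counts > 0
      slope = np.polyfit(np.log(centres[keep]), np.log(density[keep]), 1)[0]
      print("recovered exponent:", -slope)         # should be close to 3.5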

  19. Rock size-frequency distributions on Mars and implications for Mars Exploration Rover landing safety and operations

    NASA Astrophysics Data System (ADS)

    Golombek, M. P.; Haldemann, A. F. C.; Forsberg-Taylor, N. K.; DiMaggio, E. N.; Schroeder, R. D.; Jakosky, B. M.; Mellon, M. T.; Matijevic, J. R.

    2003-10-01

    The cumulative fractional area covered by rocks versus diameter measured at the Pathfinder site was predicted by a rock distribution model that follows simple exponential functions that approach the total measured rock abundance (19%), with a steep decrease in rocks with increasing diameter. The distribution of rocks >1.5 m diameter visible in rare boulder fields also follows this steep decrease with increasing diameter. The effective thermal inertia of rock populations calculated from a simple empirical model of the effective inertia of rocks versus diameter shows that most natural rock populations have cumulative effective thermal inertias of 1700-2100 J m-2 s-0.5 K-1 and are consistent with the model rock distributions applied to total rock abundance estimates. The Mars Exploration Rover (MER) airbags have been successfully tested against extreme rock distributions with a higher percentage of potentially hazardous triangular buried rocks than observed at the Pathfinder and Viking landing sites. The probability of the lander impacting a >1 m diameter rock in the first 2 bounces is <3% and <5% for the Meridiani and Gusev landing sites, respectively, and is <0.14% and <0.03% for rocks >1.5 m and >2 m diameter, respectively. Finally, the model rock size-frequency distributions indicate that rocks >0.1 m and >0.3 m in diameter, large enough to place contact sensor instruments against and abrade, respectively, should be plentiful within a single sol's drive at the Meridiani and Gusev landing sites.
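    A minimal sketch of an exponential rock-abundance model of the kind described, in which the cumulative fractional area covered by rocks of diameter at least D is F_k(D) = k * exp(-q(k) * D) for total rock abundance k. The coefficient values used for q(k) below are assumptions of the order reported in published fits of such models, not values taken from this paper.

      import numpy as np

      def cum_frac_area(D, k, q0=1.79, c=0.152):
          """Cumulative fractional area covered by rocks of diameter >= D (metres) for total
          rock abundance k, using F_k(D) = k * exp(-q(k) * D) with q(k) = q0 + c / k.
          The coefficients q0 and c are assumed illustrative values."""
          return k * np.exp(-(q0 + c / k) * D)

      k = 0.19   # total rock abundance measured at the Pathfinder site
      for D in (0.1, 0.3, 0.5, 1.0, 1.5, 2.0):
          frac = cum_frac_area(D, k)
          print(f"D >= {D:3.1f} m : fractional area {frac:.4f} "
                f"(~{frac:.2%} chance a random point lies on such a rock)")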

  20. Coagulation of grains in static and collapsing protostellar clouds

    NASA Technical Reports Server (NTRS)

    Weidenschilling, S. J.; Ruzmaikina, T. V.

    1993-01-01

    The wavelength dependence of extinction in the diffuse interstellar medium implies that it is produced by particles of dominant size of approximately 10^-5 cm. There is some indication that in the cores of dense molecular clouds, sub-micron grains can coagulate to form larger particles; this process is probably driven by turbulence. The most primitive meteorites (carbonaceous chondrites) are composed of particles with a bimodal size distribution with peaks near 1 micron (matrix) and 1 mm (chondrules). Models for chondrule formation that involve processing of presolar material by chemical reactions or through an accretion shock during infall assume that aggregates of the requisite mass could form before or during collapse. The effectiveness of coagulation during collapse has been disputed; it appears to depend on specific assumptions. The first results of detailed numerical modeling of spatial and temporal variations of particle sizes in presolar clouds, both static and collapsing, are reported in this article.

  1. Sensitive response of a model of symbiotic ecosystem to seasonal periodic drive

    NASA Astrophysics Data System (ADS)

    Rekker, A.; Lumi, N.; Mankin, R.

    2014-11-01

    A symbiotic ecosystem (metapopulation) is studied by means of the stochastic Lotka-Volterra model with generalized Verhulst self-regulation. The effect of variable environment on the carrying capacities of populations is taken into account as an asymmetric dichotomous noise and as a deterministic periodic stimulus. In the framework of the mean-field theory an explicit self-consistency equation for the system in the long-time limit is presented. Also, expressions for the probability distribution and for the moments of the population size are found. In certain cases the mean population size exhibits large oscillations in time, even if the amplitude of the seasonal environmental drive is small. Particularly, it is shown that the occurrence of large oscillations of the mean population size can be controlled by noise parameters (such as amplitude and correlation time) and by the coupling strength of the symbiotic interaction between species.

  2. Trichotomous noise controlled signal amplification in a generalized Verhulst model

    NASA Astrophysics Data System (ADS)

    Mankin, Romi; Soika, Erkki; Lumi, Neeme

    2014-10-01

    The long-time limits of the probability distribution and statistical moments of the population size are studied by means of a stochastic growth model with generalized Verhulst self-regulation. The effect of variable environment on the carrying capacity of a population is modeled by a multiplicative three-level Markovian noise and by a time periodic deterministic component. Exact expressions for the moments of the population size have been calculated. It is shown that an interplay of a small periodic forcing and colored noise can cause large oscillations of the mean population size. The conditions for the appearance of such a phenomenon are found and illustrated by graphs. Implications of the results on models of symbiotic metapopulations are also discussed. Particularly, it is demonstrated that the effect of noise-generated amplification of an input signal gets more pronounced as the intensity of symbiotic interaction increases.
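    A stochastic-simulation sketch of a generalized Verhulst model with a noisy, periodically modulated carrying capacity. The symmetric three-level switching used below is a simplification of trichotomous noise, and all rates and amplitudes are illustrative assumptions rather than the paper's parameter values.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate(T=2000.0, dt=0.01, r=1.0, beta=1.0, K0=100.0,
                   a=30.0, nu=0.5, A=5.0, omega=0.05):
          """Generalized Verhulst growth dN/dt = r*N*(1 - (N/K(t))**beta), with the carrying
          capacity K(t) driven by a three-level Markov noise in {-a, 0, +a} (switching rate nu)
          plus a weak periodic component of amplitude A and angular frequency omega."""
          steps = int(T / dt)
          N = np.empty(steps)
          N[0] = K0
          xi, levels = 0.0, np.array([-a, 0.0, a])
          for i in range(1, steps):
              if rng.random() < nu * dt:                    # Markovian switching of the noise level
                  xi = rng.choice(levels)
              K = K0 + xi + A * np.sin(omega * i * dt)      # noisy + periodic carrying capacity
              n = N[i - 1]
              N[i] = n + dt * r * n * (1.0 - (n / max(K, 1e-6)) ** beta)
          return N

      N = simulate()
      tail = N[N.size // 2:]
      print(f"mean population size: {tail.mean():.1f}")
      print(f"oscillation range (max - min, second half): {tail.max() - tail.min():.1f}")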

  3. Colonization of a territory by a stochastic population under a strong Allee effect and a low immigration pressure

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Assaf, Michael; Meerson, Baruch

    2015-06-01

    We study the dynamics of colonization of a territory by a stochastic population at low immigration pressure. We assume a sufficiently strong Allee effect that introduces, in deterministic theory, a large critical population size for colonization. At low immigration rates, the average precolonization population size is small, thus invalidating the WKB approximation to the master equation. We circumvent this difficulty by deriving an exact zero-flux solution of the master equation and matching it with an approximate nonzero-flux solution of the pertinent Fokker-Planck equation in a small region around the critical population size. This procedure provides an accurate evaluation of the quasistationary probability distribution of population sizes in the precolonization state and of the mean time to colonization, for a wide range of immigration rates. At sufficiently high immigration rates our results agree with WKB results obtained previously. At low immigration rates the results can be very different.
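    A Gillespie-type sketch of colonization under a strong Allee effect and weak immigration, using an illustrative saturating cooperative birth rate. The rate functions and parameter values are assumptions, not the model analyzed in the paper, but they reproduce the qualitative setup of a critical population size that must be crossed before colonization.

      import numpy as np

      rng = np.random.default_rng(4)

      def time_to_colonize(phi=0.5, b=2.0, d=1.0, h=10.0, n_target=40, t_max=1e5):
          """Gillespie simulation of a birth-death process with weak immigration phi and a
          cooperative (Allee-type) birth rate b*n**2/(n+h); the deterministic critical size is
          d*h/(b-d). Returns the first time the population reaches n_target ('colonization')."""
          n, t = 0, 0.0
          while t < t_max:
              birth = phi + b * n * n / (n + h)     # immigration + cooperative births
              death = d * n
              total = birth + death
              t += rng.exponential(1.0 / total)
              n += 1 if rng.random() * total < birth else -1
              if n >= n_target:
                  return t
          return np.inf

      times = [time_to_colonize() for _ in range(20)]
      print("mean time to colonization:", np.mean(times))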

  4. Colonization of a territory by a stochastic population under a strong Allee effect and a low immigration pressure.

    PubMed

    Be'er, Shay; Assaf, Michael; Meerson, Baruch

    2015-06-01

    We study the dynamics of colonization of a territory by a stochastic population at low immigration pressure. We assume a sufficiently strong Allee effect that introduces, in deterministic theory, a large critical population size for colonization. At low immigration rates, the average precolonization population size is small, thus invalidating the WKB approximation to the master equation. We circumvent this difficulty by deriving an exact zero-flux solution of the master equation and matching it with an approximate nonzero-flux solution of the pertinent Fokker-Planck equation in a small region around the critical population size. This procedure provides an accurate evaluation of the quasistationary probability distribution of population sizes in the precolonization state and of the mean time to colonization, for a wide range of immigration rates. At sufficiently high immigration rates our results agree with WKB results obtained previously. At low immigration rates the results can be very different.

  5. Incorporating Skew into RMS Surface Roughness Probability Distribution

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
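    A short sketch of the comparison described: fit both a Gaussian and an asymmetric distribution (a lognormal is used here purely as an illustrative asymmetric choice) to right-skewed roughness data and compare the implied most-probable (modal) RMS values. The data below are synthetic.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      # synthetic RMS surface roughness measurements (nm), right-skewed by construction
      rms = rng.lognormal(mean=np.log(2.0), sigma=0.35, size=500)

      # symmetric treatment: Gaussian fit, whose mode equals its mean
      gaussian_mode = rms.mean()

      # asymmetric treatment (lognormal chosen only as an example): mode = scale * exp(-s**2)
      s, loc, scale = stats.lognorm.fit(rms, floc=0.0)
      lognormal_mode = scale * np.exp(-s * s)

      print(f"sample skewness             : {stats.skew(rms):.2f}")
      print(f"Gaussian most-probable RMS  : {gaussian_mode:.2f} nm")
      print(f"lognormal most-probable RMS : {lognormal_mode:.2f} nm (lower, reflecting the skew)")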

  6. Inference of R 0 and Transmission Heterogeneity from the Size Distribution of Stuttering Chains

    PubMed Central

    Blumberg, Seth; Lloyd-Smith, James O.

    2013-01-01

    For many infectious disease processes such as emerging zoonoses and vaccine-preventable diseases, R0 is near or below one and infections occur as self-limited stuttering transmission chains. A mechanistic understanding of transmission is essential for characterizing the risk of emerging diseases and monitoring spatio-temporal dynamics. Thus methods for inferring R0 and the degree of heterogeneity in transmission from stuttering chain data have important applications in disease surveillance and management. Previous researchers have used chain size distributions to infer R0, but estimation of the degree of individual-level variation in infectiousness (as quantified by the dispersion parameter, k) has typically required contact tracing data. Utilizing branching process theory along with a negative binomial offspring distribution, we demonstrate how maximum likelihood estimation can be applied to chain size data to infer both R0 and the dispersion parameter that characterizes heterogeneity. While the maximum likelihood value for R0 is a simple function of the average chain size, the associated confidence intervals are dependent on the inferred degree of transmission heterogeneity. As demonstrated for monkeypox data from the Democratic Republic of Congo, this impacts when a statistically significant change in R0 is detectable. In addition, by allowing for superspreading events, inference of k shifts the threshold above which a transmission chain should be considered anomalously large for a given value of R0 (thus reducing the probability of false alarms about pathogen adaptation). Our analysis of monkeypox also clarifies the various ways that imperfect observation can impact inference of transmission parameters, and highlights the need to quantitatively evaluate whether observation is likely to significantly bias results. PMID:23658504
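    A sketch of the chain-size likelihood approach, using the total-progeny distribution of a branching process with negative-binomial offspring (mean R0, dispersion k) as given in the branching-process literature. The chain-size data below are invented for illustration, and the script simply maximizes the log-likelihood numerically.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      def log_chain_prob(j, R0, k):
          """log P(total chain size = j) for a branching process whose offspring distribution
          is negative binomial with mean R0 and dispersion k (subcritical regime)."""
          j = np.asarray(j, dtype=float)
          return (gammaln(k * j + j - 1) - gammaln(k * j) - gammaln(j + 1)
                  + (j - 1) * np.log(R0 / k) - (k * j + j - 1) * np.log1p(R0 / k))

      def neg_log_lik(params, sizes):
          logR0, logk = params
          return -np.sum(log_chain_prob(sizes, np.exp(logR0), np.exp(logk)))

      # invented chain-size data: number of cases per spillover introduction
      sizes = np.array([1] * 60 + [2] * 15 + [3] * 8 + [4] * 5 + [5] * 3 + [7] * 2 + [12, 20])

      res = minimize(neg_log_lik, x0=[np.log(0.5), np.log(0.5)], args=(sizes,), method="Nelder-Mead")
      R0_hat, k_hat = np.exp(res.x)
      print(f"MLE: R0 = {R0_hat:.2f}, dispersion k = {k_hat:.2f}")
      print(f"check: 1 - 1/(mean chain size) = {1.0 - 1.0 / sizes.mean():.2f}")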

  7. Pollinator communities in strawberry crops - variation at multiple spatial scales.

    PubMed

    Ahrenfeldt, E J; Klatt, B K; Arildsen, J; Trandem, N; Andersson, G K S; Tscharntke, T; Smith, H G; Sigsgaard, L

    2015-08-01

    Predicting potential pollination services of wild bees in crops requires knowledge of their spatial distribution within fields. Field margins can serve as nesting and foraging habitats for wild bees and can be a source of pollinators. Regional differences in pollinator community composition may affect this spill-over of bees. We studied how regional and local differences affect the spatial distribution of wild bee species richness, activity-density and body size in crop fields. We sampled bees both from the field centre and at two different types of semi-natural field margins, grass strips and hedges, in 12 strawberry fields. The fields were distributed over four regions in Northern Europe, representing an almost 1100 km long north-south gradient. Even over this gradient, daytime temperatures during sampling did not differ significantly between regions and therefore probably did not affect bee activity. Bee species richness was higher in field margins compared with field centres independent of field size. However, there was no difference between centre and margin in body-size or activity-density. In contrast, bee activity-density increased towards the southern regions, whereas the mean body size increased towards the north. In conclusion, our study revealed a general pattern across European regions of bee diversity, but not activity-density, declining towards the field interior, which suggests that the benefits of functional diversity of pollinators may be difficult to achieve through spill-over effects from margins to crop. We also identified dissimilar regional patterns in bee diversity and activity-density, which should be taken into account in conservation management.

  8. Mechanisms of DNA disentangling by type II topoisomerases. Comment on "Disentangling DNA molecules" by Alexander Vologodskii

    NASA Astrophysics Data System (ADS)

    Yan, Jie

    2016-09-01

    In this article [1] Dr. Vologodskii presents a comprehensive discussion on the mechanisms by which the type II topoisomerases unknot/disentangle DNA molecules. It is motivated by a mysterious capability of the nanometer-size enzymes to keep the steady-state probability of DNA entanglement/knot almost two orders of magnitude below that expected from thermal equilibrium [2-5]. In spite of obvious functional advantages of the enzymes, it raises a question regarding how such high efficiency could be achieved. The off-equilibrium steady state distribution of DNA topology is powered by ATP consumption. However, it remains unclear how this energy is utilized to bias the distribution toward disentangled/unknotted topological states of DNA.

  9. Canopy architecture of a walnut orchard

    NASA Technical Reports Server (NTRS)

    Ustin, Susan L.; Martens, Scott N.; Vanderbilt, Vern C.

    1991-01-01

    A detailed dataset describing the canopy geometry of a walnut orchard was acquired to support testing and comparison of the predictions of canopy microwave and optical inversion models. Measured canopy properties included the quantity, size, and orientation of stems, leaves, and fruit. Eight trees receiving 100 percent of estimated potential evapotranspiration water use and eight trees receiving 33 percent of potential water use were measured. The vertical distributions of stem, leaf, and fruit properties are presented with respect to irrigation treatment. Zenith and probability distributions for stems and leaf normals are presented. These data show that, after two years of reduced irrigation, the trees receiving only 33 percent of their potential water requirement had reduced fruit yields, lower leaf area index, and altered allocation of biomass within the canopy.

  10. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  11. Integrity of Ceramic Parts Predicted When Loads and Temperatures Fluctuate Over Time

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    2004-01-01

    Brittle materials are being used, and being considered for use, for a wide variety of high performance applications that operate in harsh environments, including static and rotating turbine parts for unmanned aerial vehicles, auxiliary power units, and distributed power generation. Other applications include thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and microelectromechanical systems (MEMS). In order for these high-technology ceramics to be used successfully for structural applications that push the envelope of materials capabilities, design engineers must consider that brittle materials are designed and analyzed differently than metallic materials. Unlike ductile metals, brittle materials display a stochastic strength response because of the combination of low fracture toughness and the random nature of the size, orientation, and distribution of inherent microscopic flaws. This plus the fact that the strength of a component under load may degrade over time because of slow crack growth means that a probabilistic-based life-prediction methodology must be used when the tradeoffs of failure probability, performance, and useful life are being optimized. The CARES/Life code (which was developed at the NASA Glenn Research Center) predicts the probability of ceramic components failing from spontaneous catastrophic rupture when these components are subjected to multiaxial loading and slow crack growth conditions. Enhancements to CARES/Life now allow for the component survival probability to be calculated when loading and temperature vary over time.

  12. Role of social environment and social clustering in spread of opinions in coevolving networks.

    PubMed

    Malik, Nishant; Mucha, Peter J

    2013-12-01

    Taking a pragmatic approach to the processes involved in the phenomena of collective opinion formation, we investigate two specific modifications to the coevolving network voter model of opinion formation studied by Holme and Newman [Phys. Rev. E 74, 056108 (2006)]. First, we replace the rewiring probability parameter by a distribution of probability of accepting or rejecting opinions between individuals, accounting for heterogeneity and asymmetric influences in relationships between individuals. Second, we modify the rewiring step by a path-length-based preference for rewiring that reinforces local clustering. We have investigated the influences of these modifications on the outcomes of simulations of this model. We found that varying the shape of the distribution of probability of accepting or rejecting opinions can lead to the emergence of two qualitatively distinct final states, one having several isolated connected components each in internal consensus, allowing for the existence of diverse opinions, and the other having a single dominant connected component with each node within that dominant component having the same opinion. Furthermore, more importantly, we found that the initial clustering in the network can also induce similar transitions. Our investigation also indicates that these transitions are governed by a weak and complex dependence on system size. We found that the networks in the final states of the model have rich structural properties including the small world property for some parameter regimes.

  13. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed exceeding 1.5 times the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
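    A small sketch of the kind of distributional analysis described: compute skewness and excess kurtosis of normalized irradiance series and compare a lognormal fit against the empirical upper percentiles. The two series below are synthetic stand-ins for a near-surface record and a 10 m record.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      # synthetic normalized irradiance series Ed / <Ed>: flash-dominated near the surface,
      # nearly symmetric fluctuations at 10 m depth
      series = {"near-surface (synthetic)": rng.lognormal(-0.25, 0.7, 5000),
                "10 m depth (synthetic)": rng.normal(1.0, 0.08, 5000)}

      for name, x in series.items():
          s, loc, scale = stats.lognorm.fit(x, floc=0.0)
          p90_data = np.percentile(x, 90)
          p90_fit = stats.lognorm.ppf(0.90, s, loc=0.0, scale=scale)
          print(f"{name}: skew={stats.skew(x):.2f}, excess kurtosis={stats.kurtosis(x):.2f}, "
                f"90th percentile data={p90_data:.2f} vs lognormal fit={p90_fit:.2f}")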

  14. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
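    The extreme-value lower bound described here, the probability that the largest of n initial cracks already exceeds a critical size, is simple to evaluate and to check by Monte Carlo. The crack-size distribution and numbers below are illustrative assumptions, not the dissertation's data.

      import numpy as np
      from scipy import stats

      # lower-bound estimate: failure if the largest of n initial cracks already exceeds a
      # critical size a_crit; distribution choice and numbers are illustrative assumptions
      n_cracks = 50
      a_crit = 2.0e-3                                    # critical initial crack size, m
      crack = stats.lognorm(s=0.5, scale=0.4e-3)         # initial crack-size distribution, m

      p_bound = 1.0 - crack.cdf(a_crit) ** n_cracks      # extreme-value expression
      samples = crack.rvs(size=(200_000, n_cracks), random_state=0)
      p_mc = np.mean(samples.max(axis=1) > a_crit)       # Monte Carlo check

      print(f"extreme-value lower bound : {p_bound:.3e}")
      print(f"Monte Carlo estimate      : {p_mc:.3e}")

    For failure probabilities near 10^-6 the brute-force check above becomes impractical, which is the motivation for the weakest-link sampling approach described in the abstract.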

  15. Effects of dynamical grouping on cooperation in N-person evolutionary snowdrift game

    NASA Astrophysics Data System (ADS)

    Ji, M.; Xu, C.; Hui, P. M.

    2011-09-01

    A population typically consists of agents that continually distribute themselves into different groups at different times. This dynamic grouping has recently been shown to be essential in explaining many features observed in human activities including social, economic, and military activities. We study the effects of dynamic grouping on the level of cooperation in a modified evolutionary N-person snowdrift game. Due to the formation of dynamical groups, the competition takes place in groups of different sizes at different times and players of different strategies are mixed by the grouping dynamics. It is found that the level of cooperation is greatly enhanced by the dynamic grouping of agents, when compared with a static population of the same size. As a parameter β, which characterizes the relative importance of the reward and cost, increases, the fraction of cooperative players fC increases and it is possible to achieve a fully cooperative state. Analytically, we present a dynamical equation that incorporates the effects of the competing game and group size distribution. The distribution of cooperators in different groups is assumed to be a binomial distribution, which is confirmed by simulations. Results from the analytic equation are in good agreement with numerical results from simulations. We also present detailed simulation results of fC over the parameter space spanned by the probabilities of group coalescence νm and group fragmentation νp in the grouping dynamics. A high νm and low νp promotes cooperation, and a favorable reward characterized by a high β would lead to a fully cooperative state.

  16. The Interannual Stability of Cumulative Frequency Distributions for Convective System Size and Intensity

    NASA Technical Reports Server (NTRS)

    Mohr, Karen I.; Molinari, John; Thorncroft, Chris

    2009-01-01

    The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by favoring large and/or intense convective systems. Convective systems were defined from Tropical Rainfall Measuring Mission (TRMM) data as a cluster of pixels with an 85-GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 square kilometers. The study database consisted of convective systems in West Africa from May to September 1998-2007, and in the western Pacific from May to November 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences between the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each set, there was no statistically significant interannual variability. Subsetting the database revealed some sensitivity in distribution shape to the size of the sampling area, the length of the sample period, and the climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is either wetter or drier than normal.

  17. Uniform deposition of size-selected clusters using Lissajous scanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp; Hirata, Hirohito

    2016-05-15

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt_n (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.
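    A software sketch of the scanning idea: two triangular deflection waveforms with an irrational frequency ratio trace a Lissajous-like path whose dwell time is nearly uniform over the sample area. Frequencies, duration, and the coverage metric below are illustrative assumptions.

      import numpy as np

      def triangle(t, freq):
          """Unit-amplitude triangular wave in [-1, 1]."""
          phase = (t * freq) % 1.0
          return 4.0 * np.abs(phase - 0.5) - 1.0

      # two deflection waveforms with an irrational frequency ratio, so the Lissajous-like
      # trajectory gradually fills the square sample area instead of retracing itself
      t = np.linspace(0.0, 2000.0, 2_000_000)
      x = triangle(t, 1.0)
      y = triangle(t, np.sqrt(2.0))

      # uniformity check: relative spread of dwell counts over a coarse grid of the surface
      H, _, _ = np.histogram2d(x, y, bins=20, range=[[-1, 1], [-1, 1]])
      print(f"coverage non-uniformity (std/mean of dwell counts): {H.std() / H.mean():.3f}")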

  18. Petroleum-resource appraisal and discovery rate forecasting in partially explored regions

    USGS Publications Warehouse

    Drew, Lawrence J.; Schuenemeyer, J.H.; Root, David H.; Attanasi, E.D.

    1980-01-01

    PART A: A model of the discovery process can be used to predict the size distribution of future petroleum discoveries in partially explored basins. The parameters of the model are estimated directly from the historical drilling record, rather than being determined by assumptions or analogies. The model is based on the concept of the area of influence of a drill hole, which states that the area of a basin exhausted by a drill hole varies with the size and shape of targets in the basin and with the density of previously drilled wells. It also uses the concept of discovery efficiency, which measures the rate of discovery within several classes of deposit size. The model was tested using 25 years of historical exploration data (1949-74) from the Denver basin. From the trend in the discovery rate (the number of discoveries per unit area exhausted), the discovery efficiencies in each class of deposit size were estimated. Using pre-1956 discovery and drilling data, the model accurately predicted the size distribution of discoveries for the 1956-74 period. PART B: A stochastic model of the discovery process has been developed to predict, using past drilling and discovery data, the distribution of future petroleum deposits in partially explored basins, and the basic mathematical properties of the model have been established. The model has two exogenous parameters, the efficiency of exploration and the effective basin size. The first parameter is the ratio of the probability that an actual exploratory well will make a discovery to the probability that a randomly sited well will make a discovery. The second parameter, the effective basin size, is the area of that part of the basin in which drillers are willing to site wells. Methods for estimating these parameters from locations of past wells and from the sizes and locations of past discoveries were derived, and the properties of estimators of the parameters were studied by simulation. PART C: This study examines the temporal properties and determinants of petroleum exploration for firms operating in the Denver basin. Expectations associated with the favorability of a specific area are modeled by using distributed lag proxy variables (of previous discoveries) and predictions from a discovery process model. In the second part of the study, a discovery process model is linked with a behavioral well-drilling model in order to predict the supply of new reserves. Results of the study indicate that the positive effects of new discoveries on drilling increase for several periods and then diminish to zero within 2? years after the deposit discovery date. Tests of alternative specifications of the argument of the distributed lag function using alternative minimum size classes of deposits produced little change in the model's explanatory power. This result suggests that, once an exploration play is underway, favorable operator expectations are sustained by the quantity of oil found per time period rather than by the discovery of specific size deposits. When predictions of the value of undiscovered deposits (generated from a discovery process model) were substituted for the expectations variable in models used to explain exploration effort, operator behavior was found to be consistent with these predictions. This result suggests that operators, on the average, were efficiently using information contained in the discovery history of the basin in carrying out their exploration plans. 
Comparison of the two approaches to modeling unobservable operator expectations indicates that the two models produced very similar results. The integration of the behavioral well-drilling model and discovery process model to predict the additions to reserves per unit time was successful only when the quarterly predictions were aggregated to annual values. The accuracy of the aggregated predictions was also found to be reasonably robust to errors in predictions from the behavioral well-drilling equation.

  19. Predicting the probability of slip in gait: methodology and distribution study.

    PubMed

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
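    One natural reading of the single-integral form described here is P(slip) = P(available < required) = the integral of f_required(u) * F_available(u) du, which the sketch below evaluates with the trapezoidal rule; the friction distributions and their parameters are illustrative assumptions, and the normal-normal closed form is used only as a cross-check.

      import numpy as np
      from scipy import stats
      from scipy.integrate import trapezoid

      def prob_slip(f_req, F_avail, grid):
          """P(slip) = P(available < required) = integral of f_required(u) * F_available(u) du,
          evaluated with the trapezoidal rule on a fixed grid."""
          return trapezoid(f_req(grid) * F_avail(grid), grid)

      # illustrative friction-coefficient distributions (parameter values are assumptions)
      required = stats.norm(loc=0.22, scale=0.05)
      available = stats.norm(loc=0.35, scale=0.07)
      grid = np.linspace(0.0, 1.0, 2001)

      p_num = prob_slip(required.pdf, available.cdf, grid)
      p_exact = stats.norm.cdf((0.22 - 0.35) / np.hypot(0.05, 0.07))   # normal-normal closed form
      print(f"trapezoidal: {p_num:.4e}   closed form: {p_exact:.4e}")

      # the same integral accepts any distribution pair, e.g. a lognormal required friction
      required_ln = stats.lognorm(s=0.25, scale=0.22)
      print(f"lognormal required friction: {prob_slip(required_ln.pdf, available.cdf, grid):.4e}")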

  20. Integrated-Circuit Pseudorandom-Number Generator

    NASA Technical Reports Server (NTRS)

    Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur

    1992-01-01

    Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Use of Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators, outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
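    A software analogue of the described pipeline (not the circuit itself): uniform pseudorandom words are mapped to a specified nonuniform 8-bit distribution by comparison against a stored cumulative-probability table. The target distribution below is an arbitrary example.

      import numpy as np

      rng = np.random.default_rng(5)

      # example target distribution over 8-bit outputs 0..255 (a discretized Gaussian,
      # chosen arbitrarily; any specified probability table would do)
      levels = np.arange(256)
      weights = np.exp(-0.5 * ((levels - 100) / 25.0) ** 2)
      target = weights / weights.sum()

      # software analogue of the pipeline: compare uniform 12-bit words against a stored
      # cumulative-probability table (an inverse-CDF lookup)
      cdf = np.cumsum(target)
      uniform12 = rng.integers(0, 4096, size=100_000)
      samples = np.searchsorted(cdf, (uniform12 + 0.5) / 4096.0)

      empirical = np.bincount(samples, minlength=256) / samples.size
      print("max |empirical - target| probability:", np.abs(empirical - target).max())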
