Sample records for distribution probability density

  1. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
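The classical comparison that this abstract contrasts against can be sketched with SciPy's built-in Kolmogorov-Smirnov test; the standard-normal draws and the deliberately wrong uniform hypothesis below are illustrative choices, not data from the paper:

```python
import numpy as np
from scipy import stats

# Compare the empirical CDF of i.i.d. draws against a hypothesized CDF
# using the classical Kolmogorov-Smirnov test discussed above.
rng = np.random.default_rng(0)
draws = rng.standard_normal(500)

# Correct hypothesis: the draws really are standard normal.
result = stats.kstest(draws, "norm")
print(f"D = {result.statistic:.4f}, p = {result.pvalue:.4f}")

# Wrong hypothesis (uniform on [0, 1]): the statistic is large.
bad = stats.kstest(draws, "uniform")
print(f"D = {bad.statistic:.4f}, p = {bad.pvalue:.3g}")
```

Kuiper's variant replaces the one-sided supremum D with the sum D+ + D-, which makes the test equally sensitive across the whole support; SciPy does not ship it, so it is not shown here.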

  2. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words: Granular Physics, Probability Density Functions, Fourier Transforms
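The exponential-to-Bessel pairing mentioned above can be checked numerically: under the illustrative assumption of a unit exponential magnitude density with a uniformly random direction in 2-D (my simplification, not the paper's derivation), the Cartesian component has density K0(|f_x|)/pi, which integrates to one:

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

# If force magnitudes F follow p(F) = exp(-F) with a uniformly random
# direction in 2-D, the Cartesian component f_x has density K0(|f_x|)/pi,
# a modified Bessel function of the second kind.
def cartesian_density(fx):
    return k0(abs(fx)) / np.pi

# K0 has an integrable log singularity at 0, so split the integral there.
near, _ = quad(cartesian_density, 0.0, 1.0)
far, _ = quad(cartesian_density, 1.0, np.inf)
total = 2.0 * (near + far)   # density is symmetric in f_x
print(f"total probability = {total:.6f}")
```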

  3. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
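The two-step recipe in the abstract (spectral shaping of white Gaussian noise, then an inverse-CDF transform to the target amplitude distribution) can be sketched as follows; the particular power spectrum and the exponential target are arbitrary illustrative choices, not the paper's examples:

```python
import numpy as np
from scipy import stats

# Step 1: color white Gaussian noise to a target power spectrum via FFT
# filtering. Step 2: map the colored Gaussian field through an inverse-CDF
# transform to the desired (here: exponential) amplitude distribution.
rng = np.random.default_rng(1)
n = 128
white = rng.standard_normal((n, n))

kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
psd = 1.0 / (1.0 + (kx**2 + ky**2) * 400.0)         # target power spectrum

colored = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
colored /= colored.std()                             # unit-variance Gaussian

u = stats.norm.cdf(colored)                          # Gaussian -> uniform
field = stats.expon.ppf(u)                           # uniform -> exponential
print(field.shape, field.min() >= 0.0)
```

The second step preserves the rank ordering of the field, so the spatial correlation structure survives the amplitude transform only approximately, which is why the abstract describes the method as an engineering approach.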

  4. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  5. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.

  6. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
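A log-normal energy distribution like the one fitted above is easy to parameterize by its log-mean and log-dispersion. The synthetic sample below only illustrates recovering those parameters from draws; the 0.5e51 erg value is used here as the log-median, a stand-in for the quoted most-probable energy:

```python
import numpy as np

# Log-normal explosion energies: median 0.5e51 erg, 1-sigma dispersion a
# factor of 3 (i.e. sigma = ln 3 in log space). Synthetic data only.
rng = np.random.default_rng(2)
mu, sigma = np.log(0.5e51), np.log(3.0)
energies = np.exp(rng.normal(mu, sigma, 20000))

logs = np.log(energies)
median_hat = np.exp(logs.mean())
factor_hat = np.exp(logs.std())
print(f"median ~ {median_hat:.3g} erg, dispersion factor ~ {factor_hat:.2f}")
```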

  7. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the "effective-density model". The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  8. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the "effective-density model". The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  9. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  10. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  11. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.

  12. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability densities for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar deutsche mark future exchange, finding good agreement between theory and the observed data.
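The two auxiliary densities named in the abstract are enough to simulate such a walk. The exponential pausing times and Gaussian jump magnitudes below are illustrative stand-ins for the empirically fitted densities, not the paper's fits:

```python
import numpy as np

# Continuous-time random walk: i.i.d. pausing times between jumps and
# i.i.d. jump magnitudes fully determine the process.
rng = np.random.default_rng(3)
n_jumps = 10_000

waits = rng.exponential(scale=1.0, size=n_jumps)       # pausing-time density
jumps = rng.normal(loc=0.0, scale=0.5, size=n_jumps)   # jump-size density

times = np.cumsum(waits)          # epoch of each jump
price = np.cumsum(jumps)          # log-price after each jump

# Value of the walk at an arbitrary clock time t: last jump before t.
def value_at(t):
    k = np.searchsorted(times, t, side="right")
    return 0.0 if k == 0 else price[k - 1]

print(f"final time ~ {times[-1]:.1f}, final log-price {price[-1]:.3f}")
```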

  13. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
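The maximum-entropy density subject to moment constraints has the exponential-family form p(x) = exp(-lam0 - lam1 x - lam2 x^2 - ...), with one Lagrange multiplier per constrained moment. A minimal sketch, with multipliers chosen by hand (so the result is the standard Gaussian) rather than solved for as the method actually requires:

```python
import numpy as np
from scipy.integrate import quad

# Maximum-entropy form for two moment constraints. With lam1 = 0 and
# lam2 = 0.5 this is the standard Gaussian; real use would solve for the
# multipliers that reproduce the observed moments.
lam1, lam2 = 0.0, 0.5
unnorm = lambda x: np.exp(-lam1 * x - lam2 * x**2)

Z, _ = quad(unnorm, -np.inf, np.inf)          # normalizer exp(lam0)
p = lambda x: unnorm(x) / Z

mean, _ = quad(lambda x: x * p(x), -np.inf, np.inf)
second, _ = quad(lambda x: x**2 * p(x), -np.inf, np.inf)
print(f"Z = {Z:.4f}, mean = {mean:.4f}, second moment = {second:.4f}")
```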

  14. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  15. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.

  16. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Single CPT samplings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between the Gaussian sequential stochastic simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  17. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  18. Randomized Path Optimization for the Mitigated Counter Detection of UAVs

    DTIC Science & Technology

    2017-06-01

    A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position and predict its terminal location. The KL divergence is then used to compare the probability density of aircraft termination with a normal distribution around the true terminal location, as a measure of the algorithm's success.
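For the normal-versus-normal comparison the KL divergence has a closed form, which is the natural scoring rule here. The means and variances below are made-up numbers for illustration, not values from the report:

```python
import numpy as np

# KL divergence between two univariate normal densities, as used to score
# how far an estimated terminal-location density sits from a normal
# distribution centred on the true terminal point.
def kl_normal(mu0, s0, mu1, s1):
    """KL( N(mu0, s0^2) || N(mu1, s1^2) ), closed form."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

same = kl_normal(0.0, 1.0, 0.0, 1.0)      # identical densities -> 0
shifted = kl_normal(0.0, 1.0, 2.0, 1.0)   # mean offset -> positive
print(f"KL(same) = {same:.4f}, KL(shifted) = {shifted:.4f}")
```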

  19. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

    In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of the average density at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.

  20. Competition between harvester ants and rodents in the cold desert

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.

    1979-09-30

    Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August, whereas the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.

  1. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
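Sampling from the ensemble described above is straightforward with the standard QR trick for generating Haar-distributed unitaries; the sizes N = n = 6 are arbitrary illustrative values:

```python
import numpy as np

# Sample a Haar-distributed unitary from U(N + n) via QR of a complex
# Ginibre matrix (with the usual phase fix so the distribution is Haar),
# then take the eigenvalues of its top N x N sub-block; for n >= 1 they
# lie strictly inside the unit disk almost surely.
rng = np.random.default_rng(4)
N, n = 6, 6
m = N + n

Z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
Q = Q * (np.diag(R) / np.abs(np.diag(R)))   # phase correction, column-wise

eigs = np.linalg.eigvals(Q[:N, :N])         # top sub-block eigenvalues
print(np.abs(eigs).max())
```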

  2. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
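The log-concavity claim is easy to probe numerically with SciPy's non-central chi-squared distribution: on a grid, the second differences of the log-pdf should all be non-positive for df >= 2. The parameter values are arbitrary, and a grid check is of course a sketch of the claim, not a proof:

```python
import numpy as np
from scipy import stats

# Numerical check: for df >= 2 the non-central chi-squared pdf should
# have a concave logarithm, so its grid second differences are <= 0.
df, nc = 4, 2.5
x = np.linspace(0.2, 20.0, 400)
logpdf = stats.ncx2.logpdf(x, df, nc)

second_diff = np.diff(logpdf, n=2)
print(f"max second difference = {second_diff.max():.3e}")
```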

  3. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  4. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  5. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  6. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
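The truncated-Gaussian-plus-delta construction can be sketched in a few lines: for a variable with a lower bound, the Gaussian mass below the bound is collected into a point mass at the bound, and the density above it is left untouched, so total probability is conserved. The numbers below are illustrative, not from the abstract:

```python
import numpy as np
from scipy import stats

# Gaussian prior N(mu, sigma^2) for a variable bounded below by a
# (e.g. a non-negative concentration, so a = 0). The mass below a
# becomes a delta at a rather than being spread over the allowed range.
mu, sigma, a = 0.3, 0.5, 0.0

point_mass = stats.norm.cdf(a, mu, sigma)          # P(x = a), the delta part
def density(x):                                    # continuous part, x > a
    return stats.norm.pdf(x, mu, sigma)

continuous_mass = 1.0 - stats.norm.cdf(a, mu, sigma)
total = point_mass + continuous_mass
print(f"delta mass = {point_mass:.4f}, total probability = {total:.4f}")
```

Unlike equal redistribution, this representation assigns strictly positive probability to the variable sitting exactly at the bound, which is the property the abstract highlights.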

  7. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141

  8. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based on the expectation-maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to construct the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities serve as prior probability estimates for each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method performs better.
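
    The E-step/M-step cycle described above can be sketched with a hedged toy example; a two-component 1-D Gaussian mixture stands in for the complex Wishart mixture (an assumption for illustration only), but the structure is the same: posterior class probabilities from the E-step act as weights for all parameter updates in the M-step.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # synthetic data from two well-separated components
    x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

    # EM for a two-component 1-D Gaussian mixture (stand-in for the Wishart case)
    mu = np.array([-1.0, 1.0]); var = np.array([1.0, 1.0]); pi = np.array([0.5, 0.5])
    for _ in range(100):
        # E-step: posterior class probabilities (responsibilities) per sample
        d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var) * pi
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: responsibilities weight every parameter update
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        pi = n_k / len(x)
    ```

    In the PolSAR setting each `d` column would instead be a complex Wishart density of the pixel's sample covariance matrix.
    
    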

  9. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
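
    A hedged sketch of the discretize-then-regularize step: a generic Gaussian blurring kernel stands in for the actual v sin I projection kernel (an assumption for illustration), and the Tikhonov solution minimizes the residual plus a penalty on the recovered distribution.

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Minimize ||A x - b||^2 + lam ||x||^2, i.e. solve
        (A^T A + lam I) x = A^T b."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # toy discretized Fredholm problem (Gaussian kernel, NOT the v sin I kernel)
    n = 80
    t = np.linspace(0, 1, n)
    A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
    A /= A.sum(axis=1, keepdims=True)          # row-normalized smoothing operator
    x_true = np.exp(-((t - 0.5) ** 2) / (2 * 0.1 ** 2))
    b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
    x_rec = tikhonov_solve(A, b, lam=1e-4)
    ```

    The regularization parameter `lam` here is fixed by hand; the paper proposes its own procedure for choosing it.
    
    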

  10. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  11. Stochastic analysis of particle movement over a dune bed

    USGS Publications Warehouse

    Lee, Baum K.; Jobson, Harvey E.

    1977-01-01

    Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed-elevation. (Woodard-USGS)
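
    As a hedged aside (not the study's procedure), the two-parameter gamma fits mentioned above can be sketched with a simple method-of-moments estimator, which recovers the shape and scale parameters from a sample's mean and variance:

    ```python
    import numpy as np

    def gamma_moments_fit(samples):
        """Method-of-moments estimates for the two-parameter gamma
        distribution: shape k = mean^2/var, scale theta = var/mean."""
        m, v = samples.mean(), samples.var()
        return m * m / v, v / m

    # synthetic 'rest periods' drawn from a known gamma (shape 2, scale 1.5)
    rng = np.random.default_rng(3)
    k_hat, th_hat = gamma_moments_fit(rng.gamma(2.0, 1.5, 100000))
    ```

    For field data one would fit such parameters separately at each bed elevation, as the abstract indicates.
    
    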

  12. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper discusses optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization seeks an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures and uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, required to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
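
    The role of the 29-flaw set follows from the binomial model and can be checked directly: if all 29 flaws are detected, a detector with true POD below 90% would have passed with probability under 5%, which is exactly the 90/95 claim. A minimal sketch (not NASA's qualification code):

    ```python
    def pass_probability(n_flaws, pod):
        """PPD for an all-hit demonstration: the probability that a system
        with true POD `pod` detects every one of n_flaws flaws
        (binomial with k = n)."""
        return pod ** n_flaws

    # 29/29 hits demonstrate >= 90% POD at >= 95% confidence:
    confidence = 1.0 - pass_probability(29, 0.90)
    ```

    Note that 29 is the smallest set size that works: with 28 flaws, the pass probability at POD = 0.90 exceeds 5%, so the 95% confidence claim fails.
    
    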

  13. A Non-Parametric Probability Density Estimator and Some Applications.

    DTIC Science & Technology

    1984-05-01

    distributions, which are assumed to be representative of platykurtic, mesokurtic, and leptokurtic distributions in general. The dissertation is... platykurtic distributions. Consider, for example, the uniform distribution shown in Figure 4 (Sensitivity to Support Estimation). The...results of the density function comparisons indicate that the new estimator is clearly superior for platykurtic distributions, equal to the best

  14. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and precision increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
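
    For the randomly distributed case, the detection probability defined above has a simple closed form under a Poisson assumption (an assumption here, not the study's GIS simulation): with density d items per m² and core area a m², P(≥1 item) = 1 − exp(−d·a). A quick simulation check:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def detection_probability(density, core_area, n_trials=20000):
        """P(>=1 item in a core) for randomly (Poisson) distributed items:
        simulated estimate alongside the analytic 1 - exp(-density*area)."""
        lam = density * core_area
        hits = rng.poisson(lam, n_trials) > 0
        return hits.mean(), 1.0 - np.exp(-lam)

    # e.g. 500 items/m^2 sampled with a 50 cm^2 (0.005 m^2) core
    sim, ana = detection_probability(500, 0.005)
    ```

    Clumped distributions break the Poisson assumption and lower this probability, consistent with the abstract's finding.
    
    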

  15. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems, statistical properties [such as the probability distribution, mean square displacement (MSD), and first-passage time] depend on the time span ta between initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate their probability distributions and MSDs explicitly. It turns out that despite their similarities these models react very differently to the delay ta. Aging weakly affects the shape of the probability density function and the MSD of standard Lévy walks, whereas for the jump models the shape of the probability density function changes drastically. Moreover, for the wait-first jump model we observe different behavior of the MSD when ta≪t and when ta≫t.

  16. Timescales of isotropic and anisotropic cluster collapse

    NASA Astrophysics Data System (ADS)

    Bartelmann, M.; Ehlers, J.; Schneider, P.

    1993-12-01

    From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value of the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects. First, we argue that the collapse does not start from a comoving motion of the perturbation; rather, the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation, with the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. 
on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-deSitter universe is derived.

  17. Using the Pearson Distribution for Synthesis of the Suboptimal Algorithms for Filtering Multi-Dimensional Markov Processes

    NASA Astrophysics Data System (ADS)

    Mit'kin, A. S.; Pogorelov, V. A.; Chub, E. G.

    2015-08-01

    We consider the method of constructing the suboptimal filter on the basis of approximating the a posteriori probability density of the multidimensional Markov process by the Pearson distributions. The proposed method can efficiently be used for approximating asymmetric, excessive, and finite densities.

  18. An evaluation of procedures to estimate monthly precipitation probabilities

    NASA Astrophysics Data System (ADS)

    Legates, David R.

    1991-01-01

    Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
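
    A hedged sketch of the core transform-normal step: the standard Box-Cox transform with its profile log-likelihood, maximized by a grid search over the exponent. This is the textbook form, not the paper's modified version, and the lognormal test data are invented for illustration:

    ```python
    import numpy as np

    def boxcox(x, lam):
        """Box-Cox transform: log for lam -> 0, else (x^lam - 1)/lam."""
        x = np.asarray(x, float)
        if abs(lam) < 1e-8:
            return np.log(x)
        return (x ** lam - 1.0) / lam

    def best_lambda(x, grid=np.arange(-1.0, 1.01, 0.05)):
        """Profile-likelihood grid search: choose the lam maximizing the
        normal log-likelihood of the transformed data plus the Jacobian
        term (lam - 1) * sum(log x)."""
        x = np.asarray(x, float)
        n, log_x_sum = len(x), np.sum(np.log(x))
        def llf(lam):
            y = boxcox(x, lam)
            return -0.5 * n * np.log(y.var()) + (lam - 1.0) * log_x_sum
        return max(grid, key=llf)

    # lognormal data should recover lam near 0 (the pure log transform)
    rng = np.random.default_rng(5)
    lam_hat = best_lambda(rng.lognormal(1.0, 0.5, 5000))
    ```

    For precipitation work, a cross-validation over stations (as in the paper) would replace this single in-sample fit.
    
    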

  19. On Interpreting and Extracting Information from the Cumulative Distribution Function Curve: A New Perspective with Applications

    ERIC Educational Resources Information Center

    Balasooriya, Uditha; Li, Jackie; Low, Chan Kee

    2012-01-01

    For any density function (or probability function), there always corresponds a "cumulative distribution function" (cdf). It is a well-known mathematical fact that the cdf is more general than the density function, in the sense that for a given distribution the former may exist without the existence of the latter. Nevertheless, while the…

  20. Unstable density distribution associated with equatorial plasma bubble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.

    2016-04-15

    In this work, we present a simulation study of equatorial plasma bubble (EPB) in the evening time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside EPB. Inside the density depletion that eventually evolves as EPB, both density and updraft are functions of space from which the density as implicit function of updraft velocity or the density distribution function is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows to EPB. This non-Maxwellian distribution is of a gentle-bump type, in confirmation with the recently reported distribution within EPB from space-borne measurements that offer favorable condition for small scale kinetic instabilities.

  1. Density distribution function of a self-gravitating isothermal compressible turbulent fluid in the context of molecular clouds ensembles

    NASA Astrophysics Data System (ADS)

    Donkov, Sava; Stefanov, Ivan Z.

    2018-03-01

    We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the medium equations, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q which is the third root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to a free fall at small scales.

  2. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ -cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  3. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells, taking into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of solutions. Next, we discuss the existence of stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular, and uniform densities, separated or not from zero. Particular instability results are obtained for a general class of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at comparable levels. For the estimated parameter sets we discuss the possibility of stabilising the tumour dormant steady state; instability of this steady state results in uncontrolled tumour growth. To perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.

  4. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment made by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
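
    As a minimal, hypothetical illustration of the maximum entropy assignment described above (the outcome set and constraint are invented, not from the paper): with a single mean constraint, the maxent distribution on a finite set is exponential in the constrained quantity, p_i ∝ exp(−β·x_i), with the multiplier β fixed by the constraint:

    ```python
    import numpy as np

    def maxent_probs(values, target_mean, tol=1e-10):
        """Maximum-entropy distribution on a finite outcome set under a mean
        constraint: p_i ~ exp(-beta*x_i); beta is found by bisection, since
        the constrained mean decreases monotonically in beta."""
        x = np.asarray(values, float)
        def distribution(beta):
            w = np.exp(-beta * (x - x.mean()))  # centered for numerical stability
            return w / w.sum()
        lo, hi = -50.0, 50.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            p = distribution(mid)
            if p @ x > target_mean:
                lo = mid        # mean too high -> need larger beta
            else:
                hi = mid
        return p

    # a die constrained to have mean 4.5 instead of 3.5
    p = maxent_probs(np.arange(1, 7), 4.5)
    ```

    The same mechanism, with energy or occupation-number constraints, yields the Boltzmann-type distributions the abstract refers to.
    
    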

  5. Surface Impact Simulations of Helium Nanodroplets

    DTIC Science & Technology

    2015-06-30

    mechanical delocalization of the individual helium atoms in the droplet and the quantum statistical effects that accompany the interchange of identical...incorporates the effects of atomic delocalization by treating individual atoms as smeared-out probability distributions that move along classical...probability density distributions to give effective interatomic potential energy curves that have zero-point averaging effects built into them [25

  6. Development and application of an empirical probability distribution for the prediction error of re-entry body maximum dynamic pressure

    NASA Technical Reports Server (NTRS)

    Lanzi, R. James; Vincent, Brett T.

    1993-01-01

    The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of this data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket reentry analysis between the known damage level/flight environment relationships and the predicted flight environment.

  7. Modeling pore corrosion in normally open gold- plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H{sub 2}S at 30 C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  8. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software

  9. Probability density function of non-reactive solute concentration in heterogeneous porous formations

    Treesearch

    Alberto Bellin; Daniele Tonina

    2007-01-01

    Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...

  10. Diffusion of active chiral particles

    NASA Astrophysics Data System (ADS)

    Sevilla, Francisco J.

    2016-12-01

    The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v ̂ at time t , and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and the time dependence of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the positions distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called "anomalous, yet Brownian, diffusion" effect, in which particles follow a non-Gaussian distribution for the positions yet the mean-squared displacement is a linear function of time.

  11. Predictions of malaria vector distribution in Belize based on multispectral satellite data.

    PubMed

    Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J

    1996-03-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  12. Predictions of malaria vector distribution in Belize based on multispectral satellite data

    NASA Technical Reports Server (NTRS)

    Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.

    1996-01-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  13. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated to the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barrier, average width, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior density probability function of a set of parameters by solving an equation of the following type: pdf(posterior) ˜ pdf(prior) × a likelihood function. A fitting procedure can be seen as an estimation of the posterior density probability of a set of parameters (referred as x→?) knowing a prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x→?. To solve this problem, two major paths could be taken: add approximations and hypothesis and obtain an equation to be solved numerically (minimum of a cost function or Generalized least Square method, referred as GLS) or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are natural solution for Bayesian inference problems. They avoid approximations (existing in traditional adjustment procedure based on chi-square minimization) and propose alternative in the choice of probability density distribution for priors and likelihoods. This paper will propose the use of what we are calling Bayesian Monte Carlo (referred as BMC in the rest of the manuscript) in the whole energy range from thermal, resonance and continuum range for all nuclear reaction models at these energies. Algorithms will be presented based on Monte-Carlo sampling and Markov chain. 
The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of the chosen probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance, and continuum evaluations, as well as to multigroup cross-section data assimilation, will be presented.
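The relation pdf(posterior) ∝ pdf(prior) × likelihood can be sampled with a short Markov-chain Monte-Carlo sketch in the spirit of BMC; the one-parameter Gaussian prior, Gaussian likelihood, and step size below are illustrative assumptions, not the paper's nuclear-data models.

```python
import math
import random

random.seed(1)

def log_prior(x):
    # Illustrative standard-normal prior on a single parameter (assumed)
    return -0.5 * x * x

def log_likelihood(x, data, sigma=0.5):
    # Illustrative Gaussian measurement model: data ~ N(x, sigma^2)
    return sum(-0.5 * ((d - x) / sigma) ** 2 for d in data)

def metropolis(data, n_steps=20000, step=0.5):
    """Sample pdf(posterior) ∝ pdf(prior) × likelihood with a Metropolis chain."""
    x = 0.0
    lp = log_prior(x) + log_likelihood(x, data)
    chain = []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        lp_new = log_prior(x_new) + log_likelihood(x_new, data)
        # Accept with probability min(1, posterior ratio)
        if math.log(random.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

data = [1.1, 0.9, 1.3, 1.0]          # hypothetical measurements
chain = metropolis(data)
mean = sum(chain) / len(chain)        # posterior mean, between prior (0) and data (~1.05)
```

Unlike a GLS fit, the chain makes no Gaussian approximation of the posterior; histogramming `chain` recovers its full shape.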

  14. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well-characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
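The K distribution referenced above has a convenient compound representation (an exponential intensity whose mean is itself gamma distributed), which gives one way to generate K-distributed OCT-like intensities; the shape parameters below are illustrative, not fitted to the Letter's phantom data.

```python
import random

random.seed(7)

def k_distributed_intensity(alpha, mean=1.0, n=50000):
    """Draw K-distributed intensities via the compound representation:
    I | s ~ Exponential(mean s), with s ~ Gamma(alpha) of unit mean."""
    samples = []
    for _ in range(n):
        s = random.gammavariate(alpha, mean / alpha)   # fluctuating local mean
        samples.append(random.expovariate(1.0 / s))    # speckle on top of it
    return samples

low_density = k_distributed_intensity(alpha=1.5)   # few scatterers: heavy tail
dense = k_distributed_intensity(alpha=50.0)        # many scatterers: near-exponential
```

As alpha grows, the gamma modulation tightens and the intensity statistics approach the Gaussian-speckle (exponential-intensity) limit, mirroring the low- versus high-scatterer-density contrast in the text.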

  15. Spacing distribution functions for the one-dimensional point-island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2011-07-01

    We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_XY(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_XY(x, y) gives an excellent description of the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.

  16. Bragg-cell receiver study

    NASA Technical Reports Server (NTRS)

    Wilson, Lonnie A.

    1987-01-01

    Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. The Bragg-cell receiver is fully characterized for simple RF emitter signals. This receiver is early in its development cycle when compared to the IFM receiver. Functional mathematical models are derived and presented in this report for the Bragg-cell receiver. Theoretical analysis is presented, and digital computer signal processing results are presented for the Bragg-cell receiver. Probability density function analyses are performed for the output frequency. Probability density function distributions are observed to depart from assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution, fine-grain EW Bragg-cell receiver systems.

  17. An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses

    NASA Technical Reports Server (NTRS)

    Lee, Man Hoi; Spergel, David N.

    1990-01-01

    The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.

  18. Rotated sigmoid structures in managed uneven-aged northern hardwood stands: a look at the Burr Type III distribution

    Treesearch

    Jeffrey H. Gove; Mark J. Ducey; William B. Leak; Lianjun Zhang

    2008-01-01

    Stand structures from a combined density manipulation and even- to uneven-aged conversion experiment on the Bartlett Experimental Forest (New Hampshire, USA) were examined 25 years after initial treatment for rotated sigmoidal diameter distributions. A comparison was made on these stands between two probability density functions for fitting these residual structures:...

  19. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data.

    PubMed

    Nakamura, Yoshihiro; Hasegawa, Osamu

    2017-01-01

    With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data beforehand is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as networks of prototypes of the data; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
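KDESOINN builds on classical kernel density estimation. As background, a minimal KDE sketch is shown below; the fixed bandwidth is an assumed constant, whereas KDESOINN adapts its kernels from the learned prototype network.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Classical Gaussian kernel density estimate: one kernel per sample,
    averaged and normalized so the estimate integrates to one."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(
            math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
        )
    return pdf

samples = [0.1, 0.3, 0.2, 1.9, 2.1, 2.0]   # two clusters of observations
pdf = gaussian_kde(samples, bandwidth=0.3)
# The estimated density is high near each cluster and low in the gap
```

KDESOINN replaces the per-sample kernels with kernels placed on the (far fewer) SOINN prototypes, which is what makes it practical for massive noisy streams.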

  20. Stochastic Geomorphology: A Framework for Creating General Principles on Erosion and Sedimentation in River Basins (Invited)

    NASA Astrophysics Data System (ADS)

    Benda, L. E.

    2009-12-01

    Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks where erosion, sediment flux and sediment storage are described by their probability densities. There are a number of general principles (hypotheses) that stem from this conceptual and numerical framework that may inform the science of erosion and sedimentation in river basins. Rainstorms and other perturbations, characterized by probability distributions of event frequency and magnitude, stochastically drive sediment influx to channel networks. The frequency-magnitude distribution of sediment supply, which is typically skewed, reflects strong interactions among climate, topography, vegetation, and geotechnical controls that vary between regions; the distribution varies systematically with basin area and the spatial pattern of erosion sources. Probability densities of sediment flux and storage evolve from more to less skewed forms downstream in river networks due to the convolution of the population of sediment sources in a watershed that should vary with climate, network patterns, topography, spatial scale, and degree of erosion asynchrony. The sediment flux and storage distributions are also transformed downstream due to diffusion, storage, interference, and attrition. In stochastic systems, the characteristically pulsed sediment supply and transport can create translational or stationary-diffusive valley and channel depositional landforms, the geometries of which are governed by sediment flux-network interactions. Episodic releases of sediment to the network can also drive a system memory reflected in a Hurst Effect in sediment yields and thus in sedimentological records. Similarly, discrete events of punctuated erosion on hillslopes can lead to altered surface and subsurface properties of a population of erosion source areas that can echo through time and affect subsequent erosion and sediment flux rates.
Spatial patterns of probability densities have implications for the frequency and magnitude of sediment transport and storage and thus for the formation of alluvial and colluvial landforms throughout watersheds. For instance, the combination and interference of probability densities of sediment flux at confluences creates patterns of riverine heterogeneity, including standing waves of sediment with associated age distributions of deposits that can vary from younger to older depending on network geometry and position. Although the watershed world of probability densities is rarefied and typically confined to research endeavors, it has real world implications for the day-to-day work on hillslopes and in fluvial systems, including measuring erosion, sediment transport, mapping channel morphology and aquatic habitats, interpreting deposit stratigraphy, conducting channel restoration, and applying environmental regulations. A question for the geomorphology community is whether the stochastic framework is useful for advancing our understanding of erosion and sedimentation and whether it should stimulate research to further develop, refine and test these and other principles. For example, a changing climate should lead to shifts in probability densities of erosion, sediment flux, storage, and associated habitats and thus provide a useful index of climate change in earth science forecast models.

  1. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  2. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
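A hedged sketch of fitting the beta density to cloud-cover fractions: the method-of-moments estimator and the sample values below are illustrative, not the study's 160 empirical distributions or its fitting procedure.

```python
import math

def beta_pdf(x, a, b):
    """Beta density on (0, 1) — a natural model for fractional cloud cover."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1) * (1 - x) ** (b - 1) / B

def fit_beta_moments(samples):
    """Method-of-moments estimates of the beta parameters (a, b)."""
    n = len(samples)
    m = sum(samples) / n
    v = sum((x - m) ** 2 for x in samples) / n
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# Hypothetical daily cloud-cover fractions (0 = clear, 1 = overcast)
cover = [0.05, 0.1, 0.15, 0.8, 0.9, 0.95, 0.5, 0.2, 0.85, 0.6]
a, b = fit_beta_moments(cover)
# a, b < 1 gives the U-shape typical of mostly-clear / mostly-overcast skies
```

The ability of (a, b) to produce U-shapes, J-shapes, and unimodal shapes on [0, 1] is exactly the versatility the abstract credits to the beta distribution.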

  3. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    NASA Astrophysics Data System (ADS)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is the key component for many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. It is known that conventional bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of the dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The distribution derived can model the full range of dependence and allow different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of marginal distributions, which includes two steps, (a) using the nonparametric statistics approach to detect modes and the underlying probability density, and (b) fitting the appropriate parametric probability density functions; (ii) defining the constraints based on the univariate analysis and the dependence structure; (iii) deriving and validating the entropy-based joint distribution. To validate the method, the rainfall-runoff data are collected from the small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff.
The results of the entropy-based joint distribution indicate: (1) the joint distribution derived successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions recover the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff. The study also shows that the entropy-based joint distribution is an appropriate approach to capture the dependence structure that cannot be captured by conventional bivariate joint distributions. (Figure: joint rainfall-runoff entropy-based PDF with the corresponding marginal PDFs and histograms for the W12 watershed. Table: K-S test results and RMSE for the univariate distributions derived from the maximum-entropy joint probability distribution.)

  4. On the emergence of a generalised Gamma distribution. Application to traded volume in financial markets

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, S. M.

    2005-08-01

    This letter reports on a stochastic dynamical scenario whose associated stationary probability density function is exactly a generalised form, with power-law instead of exponential decay, of the ubiquitous Gamma distribution. This generalisation, also known as the F-distribution, was empirically proposed for the first time to adjust for high-frequency stock traded volume distributions in financial markets and verified in experiments with granular material. The dynamical assumption presented herein is based on local temporal fluctuations of the average value of the observable under study. This proposal is related to superstatistics and thus to the current nonextensive statistical mechanics framework. For the specific case of stock traded volume, we connect the local fluctuations in the mean stock traded volume with the typical herding behaviour presented by financial traders. Finally, NASDAQ 1 and 2 minute stock traded volume sequences and probability density functions are numerically reproduced.

  5. General Exact Solution to the Problem of the Probability Density for Sums of Random Variables

    NASA Astrophysics Data System (ADS)

    Tribelsky, Michael I.

    2002-07-01

    The exact explicit expression for the probability density pN(x) for a sum of N random, arbitrarily correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of pN(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). The quantitative description of all three parts, as well as that for the entire profile, is obtained. A number of particular examples are considered in detail.
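For independent, identically distributed summands, pN(x) is the N-fold convolution of the single-summand density. A minimal numerical sketch follows; the grid, step size, and exponential example density are assumptions for illustration, not the paper's exact expression.

```python
import math

def convolve(p, q, dx):
    """Discrete (left-Riemann) approximation of the convolution (p * q)(x)
    for densities sampled on a common uniform grid starting at zero."""
    n = len(p)
    out = [0.0] * n
    for i in range(n):
        acc = 0.0
        for j in range(i + 1):
            acc += p[j] * q[i - j]
        out[i] = acc * dx
    return out

# Density of a single Exponential(1) summand on a uniform grid
dx = 0.05
grid = [i * dx for i in range(200)]
p1 = [math.exp(-x) for x in grid]

# p3 = p1 * p1 * p1: density of the sum of three i.i.d. draws,
# which should approximate the Gamma(3, 1) density x^2 e^(-x) / 2
p3 = convolve(convolve(p1, p1, dx), p1, dx)
```

Exponential summands are not stable, so repeating the convolution reshapes the profile toward a Gaussian core while the tail stays exponential, the core/tail/crossover structure described above.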

  6. General exact solution to the problem of the probability density for sums of random variables.

    PubMed

    Tribelsky, Michael I

    2002-08-12

    The exact explicit expression for the probability density p(N)(x) for a sum of N random, arbitrarily correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of p(N)(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). The quantitative description of all three parts, as well as that for the entire profile, is obtained. A number of particular examples are considered in detail.

  7. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
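The noncentral chi-square density used here has a standard series representation as a Poisson-weighted mixture of central chi-square densities, which is one way to evaluate it numerically; the truncation length below is an assumption adequate for small noncentrality.

```python
import math

def central_chi2_pdf(x, k):
    """Central chi-square density with k degrees of freedom (x > 0)."""
    return x ** (k / 2 - 1) * math.exp(-x / 2) / (2 ** (k / 2) * math.gamma(k / 2))

def noncentral_chi2_pdf(x, k, lam, terms=80):
    """Noncentral chi-square density with k degrees of freedom and
    noncentrality lam, via the Poisson-mixture series: the weight of the
    i-th term is Poisson(lam/2) at i, applied to a central chi-square
    density with k + 2i degrees of freedom."""
    total = 0.0
    for i in range(terms):
        w = math.exp(-lam / 2) * (lam / 2) ** i / math.factorial(i)
        total += w * central_chi2_pdf(x, k + 2 * i)
    return total

# With lam = 0 the noncentral density reduces exactly to the central one
```

In the spectral-estimation setting, lam reflects the deterministic (tonal) signal power and k the number of averaged periodogram components; both here are placeholders.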

  8. On the probability distribution function of the mass surface density of molecular clouds. II.

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-11-01

    The probability distribution function (PDF) of the mass surface density of molecular clouds provides essential information about the structure of molecular cloud gas and condensed structures out of which stars may form. In general, the PDF shows two basic components: a broad distribution around the maximum with resemblance to a log-normal function, and a tail at high mass surface densities attributed to turbulence and self-gravity. In a previous paper, the PDF of condensed structures has been analyzed and an analytical formula presented based on a truncated radial density profile, ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2), with central density ρ_c and inner radius r_0, widely used in astrophysics as a generalization of physical density profiles. In this paper, the results are applied to analyze the PDF of self-gravitating, isothermal, pressurized, spherical (Bonnor-Ebert spheres) and cylindrical condensed structures, with emphasis on the dependence of the PDF on the external pressure p_ext and on the overpressure q^-1 = p_c/p_ext, where p_c is the central pressure. Apart from individual clouds, we also consider ensembles of spheres or cylinders, where effects caused by a variation of the pressure ratio, a distribution of condensed cores within a turbulent gas, and (in the case of cylinders) a distribution of inclination angles on the mean PDF are analyzed. The probability distribution of pressure ratios q^-1 is assumed to be given by P(q^-1) ∝ q^(-k_1)/(1 + (q_0/q)^γ)^((k_1 + k_2)/γ), where k_1, γ, k_2, and q_0 are fixed parameters. The PDF of individual spheres with overpressures below ~100 is well represented by the PDF of a sphere with an analytical density profile with n = 3. At higher pressure ratios, the PDF at mass surface densities Σ ≪ Σ(0), where Σ(0) is the central mass surface density, asymptotically approaches the PDF of a sphere with n = 2. Consequently, the power-law asymptote at mass surface densities above the peak steepens from P_sph(Σ) ∝ Σ^-2 to P_sph(Σ) ∝ Σ^-3.
The corresponding asymptote of the PDF of cylinders for large q^-1 is approximately given by P_cyl(Σ) ∝ Σ^(-4/3) (1 - (Σ/Σ(0))^(2/3))^(-1/2). The distribution of overpressures q^-1 produces a power-law asymptote at high mass surface densities given by ∝ Σ^(-2k_2 - 1) (spheres) or ∝ Σ^(-2k_2) (cylinders). Appendices are available in electronic form at http://www.aanda.org

  9. The propagator of stochastic electrodynamics

    NASA Astrophysics Data System (ADS)

    Cavalleri, G.

    1981-01-01

    The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density ∝ ω^3 is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ*, where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new, since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics on this problem, stochastic electrodynamics requires no renormalization.

  10. Modeling utilization distributions in space and time

    USGS Publications Warehouse

    Keating, K.A.; Cherry, S.

    2009-01-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
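The wrapped Cauchy kernel mentioned above has a closed-form density on the circle, which is what makes it convenient for circular covariates like day of year; the concentration parameter rho and the day-of-year mapping below are illustrative choices, not the paper's fitted values.

```python
import math

def wrapped_cauchy_kernel(theta, mu, rho):
    """Wrapped Cauchy density on the circle, centered at mu; rho in [0, 1)
    controls concentration (rho -> 0 is uniform, rho -> 1 is a spike)."""
    return (1 - rho ** 2) / (
        2 * math.pi * (1 + rho ** 2 - 2 * rho * math.cos(theta - mu))
    )

# Map day of year onto the circle and evaluate a kernel centered on day 180,
# so that late December and early January are correctly treated as close
day = 172
theta = 2 * math.pi * day / 365.25
mu = 2 * math.pi * 180 / 365.25
w = wrapped_cauchy_kernel(theta, mu, rho=0.9)
```

In a product kernel UD estimator, this circular factor would multiply ordinary planar kernels in (x, y); the circular form prevents the artificial year-boundary discontinuity a linear kernel would create.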

  11. Predicting the cosmological constant with the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.

    2008-09-15

    It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  12. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function which uses the mean, variance, and Hurst coefficient of the model space to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion using initial models generated by our method gives higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
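Power-law (fractal) distributed samples of the kind advocated for initial models can be drawn by inverse-transform sampling; the exponent and lower cutoff below are illustrative assumptions, not values from the study.

```python
import random

random.seed(3)

def sample_power_law(alpha, x_min, n):
    """Inverse-transform draws from p(x) ∝ x^(-alpha) for x >= x_min.
    Inverting the CDF 1 - (x/x_min)^(1 - alpha) gives the formula below."""
    return [
        x_min * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
        for _ in range(n)
    ]

samples = sample_power_law(alpha=2.5, x_min=1.0, n=20000)
# Heavy tail: the sample maximum far exceeds the median
```

Contrast this with Gaussian-initialized schemes: the heavy tail lets the optimizer's starting models occasionally probe extreme but physically plausible parameter values.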

  13. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
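The reweighting step, propagating one shared sampling density and reweighting under each candidate model, can be sketched as self-normalized importance sampling; the two Gaussian candidate densities and the broad proposal below are hypothetical stand-ins for the paper's multimodel posterior.

```python
import math
import random

random.seed(5)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two hypothetical plausible candidate densities for an uncertain input
candidates = [(0.0, 1.0), (0.5, 1.5)]   # (mu, sigma) pairs, assumed
proposal = (0.25, 2.0)                  # one broad density covering both

xs = [random.gauss(*proposal) for _ in range(50000)]

# Reweight the single set of proposal samples under each candidate model,
# so any expensive model evaluation at xs is done only once
means = []
for mu, sigma in candidates:
    w = [normal_pdf(x, mu, sigma) / normal_pdf(x, *proposal) for x in xs]
    total = sum(w)
    means.append(sum(wi * xi for wi, xi in zip(w, xs)) / total)
# means[0] recovers the mean of candidate 1, means[1] that of candidate 2,
# both from the same sample set
```

This is the source of the cost reduction claimed above: the candidate-specific statistics differ only in cheap weights, not in fresh samples.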

  14. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
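The skewness target used by the genetic algorithm is the third standardized moment of the map values; a standalone sketch of the statistic follows (the toy "maps" are illustrative lists, not real electron density).

```python
def skewness(values):
    """Sample skewness (third standardized moment) of map values — the
    target function the genetic algorithm maximizes."""
    n = len(values)
    m = sum(values) / n
    m2 = sum((v - m) ** 2 for v in values) / n
    m3 = sum((v - m) ** 3 for v in values) / n
    return m3 / m2 ** 1.5

flat_map = [0.0, 1.0, 2.0, 3.0, 4.0]        # symmetric values: skewness 0
peaky_map = [0.0, 0.0, 0.1, 0.0, 0.0, 5.0]  # protein-like: few high peaks
# A better phase set concentrates density into sharp peaks over a flat
# solvent background, which raises the skewness
```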

  15. Improving experimental phases for strong reflections prior to density modification

    DOE PAGES

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...

    2013-09-20

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  16. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.

  17. On the probability distribution function of the mass surface density of molecular clouds. I

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-05-01

    The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas, which is modeled using a log-normal function, while the second component is attributed to condensed structures modeled using a simple power law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a radial density profile truncated at the cloud radius r_cl. The assumed profile is of the form ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2) for arbitrary power n, where ρ_c and r_0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at maximal mass surface density. The PDF of spherical condensations, and the asymptotic PDF of cylinders in the limit of infinite overdensity ρ_c/ρ(r_cl), flattens for steeper density profiles and has a power-law asymptote at low and high mass surface densities and a well-defined maximum. The power index of the asymptote Σ^(-γ) of the logarithmic PDF (Σ P(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n - 1) - 1 (spheres) or by γ = n/(n - 1) - 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org
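
    The asymptote indices quoted above are simple functions of the profile power n; a minimal sketch in plain Python, using exactly the two formulas stated in the abstract:

```python
# Power index gamma of the high-surface-density asymptote Sigma^(-gamma)
# of the logarithmic PDF, for truncated spheres and for cylinders in the
# limit of infinite overdensity (formulas as quoted in the abstract).

def gamma_sphere(n):
    """gamma = (n + 1)/(n - 1) - 1 for a sphere with profile index n."""
    return (n + 1) / (n - 1) - 1

def gamma_cylinder(n):
    """gamma = n/(n - 1) - 1 for a cylinder in the infinite-overdensity limit."""
    return n / (n - 1) - 1

# Steeper profiles (larger n) give shallower high-density tails in both cases.
for n in (2, 3, 4):
    print(n, gamma_sphere(n), gamma_cylinder(n))
```

    For an isothermal-like sphere (n = 2) this gives γ = 2, while n = 3 gives γ = 1, illustrating how the tail slope constrains the profile.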

  18. The role of lower-hybrid-wave collapse in the auroral ionosphere.

    PubMed

    Schuck, P W; Ganguli, G I; Kintner, P M

    2002-08-05

    In regions where lower-hybrid solitary structures (LHSS) are observed, the character of auroral lower-hybrid turbulence (LHT) (0-20 kHz) is investigated using the amplitude probability distribution of the electric field. The observed probability distributions are accurately described by a Rayleigh distribution with two degrees of freedom. The statistics of the LHT exhibit no evidence of the global modulational instability or self-similar wave collapse. We conclude that nucleation and resonant scattering in preexisting density depletions are the processes responsible for LHSS in auroral LHT.
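
    A check of this kind is easy to reproduce with SciPy; the sketch below uses synthetic Rayleigh-distributed amplitudes standing in for the measured field amplitudes (the scale value and sample size are arbitrary assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for measured field amplitudes: Rayleigh-distributed,
# as expected when the two field quadratures are independent Gaussians
# (two degrees of freedom).
amplitude = rng.rayleigh(scale=2.0, size=5000)

# Fit the Rayleigh scale (location pinned at 0) and test the fit with a
# Kolmogorov-Smirnov statistic.
loc, scale = stats.rayleigh.fit(amplitude, floc=0)
stat, pvalue = stats.kstest(amplitude, "rayleigh", args=(loc, scale))
print(f"fitted scale = {scale:.3f}, KS p-value = {pvalue:.3f}")
```

    A large p-value is consistent with the Rayleigh hypothesis; a systematic excess over the Rayleigh tail would instead point to intermittent, collapse-like statistics.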

  19. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
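
    The workflow (fit several candidate parametric densities to the raw data, then select among them quantitatively) can be sketched with SciPy; the depth data below are synthetic stand-ins, and the candidate set and AIC-based selection are illustrative choices, not the paper's exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for observed water depths used by juvenile fish.
depth = rng.lognormal(mean=0.0, sigma=0.4, size=300)

# Candidate probability density functions for the HSC curve.
candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma, "norm": stats.norm}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(depth)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(depth, *params))
    aic[name] = 2 * len(params) - 2 * loglik       # Akaike information criterion

best = min(aic, key=aic.get)                       # lowest AIC wins
print(best, {k: round(v, 1) for k, v in aic.items()})
```

    The fitted parameters of the winning density then define the HSC curve directly, and standard likelihood machinery (e.g. bootstrap refits) gives the estimation uncertainty.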

  20. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift z_f probabilities of being progenitors/descendants of a galaxy population at another redshift z_0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
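
    The core idea, replacing a sharp number-density bin with PDF-derived weights when averaging progenitor properties, can be sketched as follows; the toy catalogue, the Gaussian weight function, and its median and spread are illustrative assumptions, not the paper's calibrated distributions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy catalogue at an earlier epoch: log10 cumulative comoving number density
# and log10 stellar mass for each galaxy, with scatter.
logn = rng.normal(-4.0, 0.5, size=10_000)
logmass = 11.0 - 1.2 * (logn + 4.0) + rng.normal(0, 0.2, size=10_000)

# Probabilistic linking: weight each galaxy by a PDF describing where the
# progenitors of the target population are expected to sit in number-density
# space.  A Gaussian with a drifted median and finite spread is an
# illustrative stand-in for the calibrated distribution functions.
median_logn, spread = -4.2, 0.3
weights = np.exp(-0.5 * ((logn - median_logn) / spread) ** 2)
weighted_mass = np.average(logmass, weights=weights)

# Sharp-bin alternative (constant number density) for comparison.
in_bin = np.abs(logn - median_logn) < 0.05
binned_mass = logmass[in_bin].mean()
print(weighted_mass, binned_mass)
```

    Because the weights admit galaxies from the full dispersed distribution, the weighted average responds to the spread in number density that the sharp bin ignores.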

  1. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
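
    The ZMNL idea mentioned above, shaping the spectrum in the Gaussian domain and then pushing the samples through a zero-memory nonlinearity to impose the target marginal, can be sketched as follows; the spectrum shape and the exponential target marginal are illustrative choices, not the paper's examples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Step 1: a stationary Gaussian time history with a prescribed power spectrum
# (a simple low-pass shape here), built by randomizing spectral phases.
n = 1 << 14
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = 1.0 / (1.0 + (freqs / 0.05) ** 2)          # target PSD shape
phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
x = np.fft.irfft(np.sqrt(spectrum) * np.exp(1j * phases), n=n)
x /= x.std()                                          # unit-variance Gaussian

# Step 2: zero-memory nonlinear (ZMNL) transform: map each Gaussian sample
# through its CDF, then through the inverse CDF of the target marginal
# (exponential here), imposing the desired non-Gaussian distribution while
# largely preserving the spectral shape.
u = stats.norm.cdf(x)
y = stats.expon.ppf(u)

print(stats.skew(y))  # the exponential marginal has skewness 2
```

    The nonlinearity distorts the spectrum somewhat; matching both the marginal and the spectrum exactly requires the iterative corrections that the paper develops.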

  2. Weak gravitational lensing effects on the determination of Ω_m and Ω_Λ from SNeIa

    NASA Astrophysics Data System (ADS)

    Valageas, P.

    2000-02-01

    In this article we present an analytical calculation of the probability distribution of the magnification of distant sources due to weak gravitational lensing from non-linear scales. We use a realistic description of the non-linear density field, which has already been compared with numerical simulations of structure formation within hierarchical scenarios. Then, we can directly express the probability distribution P(μ) of the magnification in terms of the probability distribution of the density contrast realized on non-linear scales (typical of galaxies) where the local slope of the initial linear power spectrum is n = -2. We recover the behaviour seen by numerical simulations: P(μ) peaks at a value slightly smaller than the mean <μ> = 1 and it shows an extended large-μ tail (as described in another article, our predictions also show good quantitative agreement with results from N-body simulations for a finite smoothing angle). Then, we study the effects of weak lensing on the derivation of the cosmological parameters from SNeIa. We show that the inaccuracy introduced by weak lensing is not negligible: ΔΩ_m ≳ 0.3 for two observations at z_s = 0.5 and z_s = 1. However, observations can unambiguously discriminate between Ω_m = 0.3 and Ω_m = 1. Moreover, in the case of a low-density universe one can clearly distinguish an open model from a flat cosmology (besides, the error decreases as the number of observed SNeIa increases). Since distant sources are more likely to be "demagnified", the most probable value of the observed density parameter Ω_m is slightly smaller than its actual value. On the other hand, one may obtain some valuable information on the properties of the underlying non-linear density field from the measurement of weak lensing distortions.

  3. Statistical analysis of dislocations and dislocation boundaries from EBSD data.

    PubMed

    Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N

    2017-08-01

    Electron backscatter diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess geometrically necessary dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using the disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows accounting for measurement noise and leads to more accurate results. The disorientation gradient is then used to analyze dislocation boundaries, following the same principle previously applied to TEM data. In previous papers, dislocation boundaries were classified as geometrically necessary boundaries (GNBs) and incidental dislocation boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described with a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained through EBSD data, as reported in this paper. This opens the route for determining the IDB and GNB probability density distribution functions separately from EBSD data, with increased statistical relevance as compared to TEM data. The method is applied to deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, and may not even have an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities, but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to other nontrivially distributed financial data as well.

  5. Time analysis of volcanic activity on Io by means of plasma observations

    NASA Technical Reports Server (NTRS)

    Mekler, Y.; Eviatar, A.

    1980-01-01

    A model of Io volcanism in which the probability of activity obeys a binomial distribution is presented. Observed values of the electron density obtained over a 3-year period by ground-based spectroscopy are fitted to such a distribution. The best fit is found for a total number of 15 volcanoes with a probability of individual activity at any time of 0.143. The Pioneer 10 ultraviolet observations are reinterpreted as emissions of sulfur and oxygen ions and are found to be consistent with a plasma much less dense than that observed by the Voyager spacecraft. Late 1978 and the first half of 1979 are shown to be periods of anomalous volcanicity. Rapid variations in electron density are related to enhanced radial diffusion.
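
    The fitted model is easy to evaluate; a short sketch using the values quoted above (15 volcanoes, individual activity probability 0.143):

```python
from math import comb

n, p = 15, 0.143  # fitted number of volcanoes and per-volcano activity probability

def pmf(k):
    """Binomial probability that exactly k of the n volcanoes are active."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Most probable number of simultaneously active volcanoes, and the chance
# of a fully quiescent Io at any given time.
probs = [pmf(k) for k in range(n + 1)]
mode = max(range(n + 1), key=lambda k: probs[k])
print(mode, pmf(0))
```

    With these parameters, about two volcanoes are typically active at once, and total quiescence is expected roughly one observation in ten, which is the sense in which the 1978-1979 period reads as anomalous.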

  6. Density PDFs of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2012-09-01

    The probability distribution functions (PDFs) of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| ≥ 5° are considered separately. Our results provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.

  7. Statistical distribution of the vacuum energy density in racetrack Kähler uplift models in string theory

    NASA Astrophysics Data System (ADS)

    Sumitomo, Yoske; Tye, S.-H. Henry; Wong, Sam S. C.

    2013-07-01

    We study a racetrack model in the presence of the leading α'-correction in flux compactification in Type IIB string theory, for the purpose of getting conceivable de-Sitter vacua in the large compactified volume approximation. Unlike the Kähler Uplift model studied previously, the α'-correction is more controllable for the meta-stable de-Sitter vacua in the racetrack case since the constraint on the compactified volume size is very much relaxed. We find that the vacuum energy density Λ for de-Sitter vacua approaches zero exponentially as the volume grows. We also analyze properties of the probability distribution of Λ in this class of models. As in other cases studied earlier, the probability distribution again peaks sharply at Λ = 0. We also study the Racetrack Kähler Uplift model in the Swiss-Cheese type model.

  8. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
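
    Property (1) can be probed numerically; the sketch below builds letter probabilities from random spacings, enumerates all words up to a small cutoff length, and fits the log-log slope of the rank-probability curve (the alphabet size, space probability, cutoff, and fit range are illustrative choices):

```python
import numpy as np
from itertools import product
from math import prod

rng = np.random.default_rng(4)

# Letter probabilities as spacings of a random division of [0, 1], scaled so
# that the word-terminating space has probability s.
n_letters, s = 20, 0.2
cuts = np.sort(rng.uniform(0, 1, n_letters - 1))
letter_p = np.diff(np.concatenate(([0.0], cuts, [1.0]))) * (1 - s)

# Probability of a word = product of its letter probabilities times s.
# Enumerate all words up to length 4 and rank them by probability.
probs = []
for length in (1, 2, 3, 4):
    for word in product(letter_p, repeat=length):
        probs.append(s * prod(word))
probs = np.sort(np.array(probs))[::-1]
ranks = np.arange(1, probs.size + 1)

# Log-log slope over a mid-range of ranks; should come out near -1.
lo, hi = 100, 10_000
slope = np.polyfit(np.log(ranks[lo:hi]), np.log(probs[lo:hi]), 1)[0]
print(slope)
```

    The finite cutoff distorts the extreme tail, which is why the fit is restricted to a mid-range of ranks.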

  9. Multipartite entanglement characterization of a quantum phase transition

    NASA Astrophysics Data System (ADS)

    Costantini, G.; Facchi, P.; Florio, G.; Pascazio, S.

    2007-07-01

    A probability density characterization of multipartite entanglement is tested on the one-dimensional quantum Ising model in a transverse field. The average and second moment of the probability distribution are numerically shown to be good indicators of the quantum phase transition. We comment on multipartite entanglement generation at a quantum phase transition.

  10. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multiple Conditions

    DTIC Science & Technology

    2009-03-01

    Front-matter excerpts recovered from the thesis (AFIT/GE/ENG/09-23): D, the random variable governing the distribution of dither values; p_D(t), the probability density function of the dither. The thesis quantifies the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.

  11. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions

    DTIC Science & Technology

    2009-03-01

    Front-matter excerpts recovered from the thesis (AFIT/GE/ENG/09-23): D, the random variable governing the distribution of dither values; p_D(t), the probability density function of the dither. The thesis quantifies the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.

  12. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
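
    The mechanism, a Gaussian process whose parameters are themselves random producing a non-Gaussian one-point density, can be demonstrated in miniature; the exponential variance distribution below is an illustrative choice (it yields a Laplace marginal, a standard superstatistics result), not the paper's memory-kernel parametrisation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Superstatistics in miniature: draw a random variance for each sample, then
# draw the sample from a zero-mean Gaussian with that variance.  An
# exponentially distributed variance gives a Laplace (double-exponential)
# marginal.
n = 200_000
sigma2 = rng.exponential(scale=1.0, size=n)   # random "environmental" variance
v = rng.normal(0.0, np.sqrt(sigma2))          # Gaussian given the variance

# Excess kurtosis: 0 for a Gaussian, 3 for a Laplace distribution.
kurt = stats.kurtosis(v)
print(kurt)
```

    Each individual sample is conditionally Gaussian, yet the mixture is distinctly heavy-tailed, mirroring the Gaussian one-point but non-Gaussian joint behaviour described above.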

  13. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.

    1980-01-01

    Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data were compared with some limited field data. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.

  14. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin (University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck); Hilgenfeld, Rolf (hilgenfeld@biochem.uni-luebeck.de)

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  15. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal-mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  16. Large Scale Data Analysis and Knowledge Extraction in Communication Data

    DTIC Science & Technology

    2017-03-31

    For this purpose, we developed a novel method, the "Correlation Density Rank", which finds the probability density distribution of related frequent events. The method is used to derive the community tree from the network, which in the real world is dynamic. Related publication: "Detecting Community Structure in Dynamic Social Networks using the Correlation Density Rank," 2014 ASE BigData/SocialCom/Cybersecurity Conference, Stanford.

  17. Protein single-model quality assessment by feature-based probability density functions.

    PubMed

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate a probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. This good performance shows that Qprob is effective at assessing the quality of models of hard targets. These results demonstrate that this new probability-density-distribution-based method is effective for protein single-model quality assessment and useful for protein structure prediction. The Qprob webserver and software are freely available at http://calla.rnet.missouri.edu/qprob/.

  18. Dual Approach To Superquantile Estimation And Applications To Density Fitting

    DTIC Science & Technology

    2016-06-01

    We incorporate additional constraints to improve the fidelity of density estimates in tail regions. We limit our investigation to data with heavy tails, where risk quantification is typically the most difficult. Demonstrations are provided using samples of various heavy-tailed distributions. Subject terms: probability density estimation, epi-splines, optimization, risk quantification.

  19. Study on probability distribution of prices in electricity market: A case study of zhejiang province, china

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    The study of the probability density function and distribution function of electricity prices helps power suppliers and purchasers make accurate estimates for their own operations, and helps the regulator monitor the periods deviating from normal distribution. Based on the assumption of normally distributed load and the non-linear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey a normal distribution approximately only when the supply-demand relationship is loose, whereas the prices deviate from normal distribution and exhibit a strong right-skewness characteristic when supply is tight. Finally, the real electricity markets also display a narrow-peak characteristic when undersupply occurs.
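
    The qualitative conclusion, that a normally distributed load mapped through a nearly linear supply curve gives near-normal prices while a convex (tight-supply) segment produces right skewness, can be reproduced with a Monte Carlo sketch; the supply-curve shape and all numbers are illustrative assumptions, not the Zhejiang fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Load as a fraction of system capacity, assumed normally distributed.
load = rng.normal(0.7, 0.08, size=100_000)

# Illustrative aggregate supply curve: linear at moderate load, rising
# steeply (convex) as load approaches capacity.
price_loose = 20 + 30 * load
price_tight = price_loose + 20_000 * np.maximum(load - 0.85, 0.0) ** 2

skew_loose = stats.skew(price_loose)   # linear map of a normal: ~0
skew_tight = stats.skew(price_tight)   # convexity creates a right tail
print(skew_loose, skew_tight)
```

    The convex segment only engages when load occasionally nears capacity, which is exactly the tight supply-demand regime in which the observed price distribution becomes right-skewed.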

  20. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function: log[p(ρ(x)|PROT) p_PROT(x) + p(ρ(x)|SOLV) p_SOLV(x) + p(ρ(x)|H) p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) is the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif is actually present. One appropriate structural motif is a helical structure within the crystallographic structure.
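
    The mixture form of the local log-likelihood is easy to evaluate at a single map point; the sketch below uses Gaussian conditional densities and prior probabilities that are purely illustrative numbers, not values from the patent:

```python
import math

def local_log_likelihood(rho, p_prot, pdf_prot, p_solv, pdf_solv, p_h, pdf_h):
    """log[ p(rho|PROT) p_PROT + p(rho|SOLV) p_SOLV + p(rho|H) p_H ]."""
    return math.log(pdf_prot(rho) * p_prot + pdf_solv(rho) * p_solv + pdf_h(rho) * p_h)

def gauss(mu, sigma):
    """Illustrative Gaussian conditional density for the local electron density."""
    return lambda r: math.exp(-0.5 * ((r - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Protein density tends to be high-contrast, solvent flat and low, and a
# known motif sharply peaked; the numbers below are assumptions for the demo.
ll = local_log_likelihood(
    rho=0.9,
    p_prot=0.5, pdf_prot=gauss(1.0, 0.3),
    p_solv=0.4, pdf_solv=gauss(0.0, 0.2),
    p_h=0.1, pdf_h=gauss(1.2, 0.1),
)
print(ll)
```

    Summing this quantity over map points (and propagating it to the structure factors) is what the maximization in the claim operates on.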

  1. Millimeter-wave Line Ratios and Sub-beam Volume Density Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroy, Adam K.; Gallagher, Molly; Usero, Antonio

    We explore the use of mm-wave emission line ratios to trace molecular gas density when observations integrate over a wide range of volume densities within a single telescope beam. For observations targeting external galaxies, this case is unavoidable. Using a framework similar to that of Krumholz and Thompson, we model emission for a set of common extragalactic lines from lognormal and power law density distributions. We consider the median density of gas that produces emission and the ability to predict density variations from observed line ratios. We emphasize line ratio variations because these do not require us to know the absolute abundance of our tracers. Patterns of line ratio variations have the potential to illuminate the high-end shape of the density distribution, and to capture changes in the dense gas fraction and median volume density. Our results with and without a high-density power law tail differ appreciably; we highlight better knowledge of the probability density function (PDF) shape as an important area. We also show the implications of sub-beam density distributions for isotopologue studies targeting dense gas tracers. Differential excitation often implies a significant correction to the naive case. We provide tabulated versions of many of our results, which can be used to interpret changes in mm-wave line ratios in terms of adjustments to the underlying density distributions.

  2. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  3. Connection between two statistical approaches for the modelling of particle velocity and concentration distributions in turbulent flow: The mesoscopic Eulerian formalism and the two-point probability density function method

    NASA Astrophysics Data System (ADS)

    Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre

    2006-12-01

    The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.

  4. Prediction of fatty acid-binding residues on protein surfaces with three-dimensional probability distributions of interacting atoms.

    PubMed

    Mahalingam, Rajasekaran; Peng, Hung-Pin; Yang, An-Suei

    2014-08-01

    Protein-fatty acid interaction is vital for many cellular processes and understanding this interaction is important for functional annotation as well as drug discovery. In this work, we present a method for predicting the fatty acid (FA)-binding residues by using three-dimensional probability density distributions of interacting atoms of FAs on protein surfaces, which are derived from the known protein-FA complex structures. A machine learning algorithm was established to learn the characteristic patterns of the probability density maps specific to the FA-binding sites. The predictor was trained with five-fold cross-validation on a non-redundant training set and then evaluated with an independent test set as well as on a holo-apo pairs dataset. The results showed good accuracy in predicting the FA-binding residues. Further, the predictor developed in this study is implemented as an online server which is freely accessible at the following website, http://ismblab.genomics.sinica.edu.tw/. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. A statistical analysis of the distribution of a larval nematode (Anisakis sp.) in the musculature of chum salmon (Oncorhynchus keta - Walbaum)

    USGS Publications Warehouse

    Novotny, A.J.

    1960-01-01

    The one factor that probably has the greatest effect on the distributional patterns of Anisakis within chum salmon musculature is the total intensity of infection (or population density of Anisakis) in each fish.

  6. A Test of the Exponential Distribution for Stand Structure Definition in Uneven-aged Loblolly-Shortleaf Pine Stands

    Treesearch

    Paul A. Murphy; Robert M. Farrar

    1981-01-01

    In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function. The left-truncated and doubly truncated forms of the exponential were used.

  7. Nonparametric density estimation and optimal bandwidth selection for protein unfolding and unbinding data

    NASA Astrophysics Data System (ADS)

    Bura, E.; Zhmurov, A.; Barsegov, V.

    2009-01-01

    Dynamic force spectroscopy and steered molecular simulations have become powerful tools for analyzing the mechanical properties of proteins, and the strength of protein-protein complexes and aggregates. Probability density functions of the unfolding forces and unfolding times for proteins, and rupture forces and bond lifetimes for protein-protein complexes allow quantification of the forced unfolding and unbinding transitions, and mapping the biomolecular free energy landscape. The inference of the unknown probability distribution functions from the experimental and simulated forced unfolding and unbinding data, as well as the assessment of analytically tractable models of the protein unfolding and unbinding requires the use of a bandwidth. The choice of this quantity is typically subjective as it draws heavily on the investigator's intuition and past experience. We describe several approaches for selecting the "optimal bandwidth" for nonparametric density estimators, such as the traditionally used histogram and the more advanced kernel density estimators. The performance of these methods is tested on unimodal and multimodal skewed, long-tailed distributed data, as typically observed in force spectroscopy experiments and in molecular pulling simulations. The results of these studies can serve as a guideline for selecting the optimal bandwidth to resolve the underlying distributions from the forced unfolding and unbinding data for proteins.
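    A minimal sketch of two bandwidth-selection strategies of the kind discussed here, Silverman's rule of thumb and leave-one-out likelihood cross-validation (the gamma sample and all numbers are illustrative stand-ins, not the authors' data or code):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic skewed, long-tailed "forces" mimicking unfolding-force data
# (illustrative stand-in, not the authors' dataset).
rng = np.random.default_rng(0)
data = rng.gamma(shape=3.0, scale=50.0, size=200)
n, sigma = data.size, data.std(ddof=1)

# Silverman's rule of thumb for a Gaussian kernel.
h_silverman = 1.06 * sigma * n ** (-1 / 5)

# Leave-one-out log-likelihood cross-validation over a bandwidth grid.
def loo_log_likelihood(h):
    total = 0.0
    for i in range(n):
        # bw_method is a factor multiplying the data std, so divide h out.
        kde = gaussian_kde(np.delete(data, i), bw_method=h / sigma)
        total += np.log(kde(data[i:i + 1])[0])
    return total

grid = np.linspace(0.3 * h_silverman, 3.0 * h_silverman, 10)
h_cv = grid[np.argmax([loo_log_likelihood(h) for h in grid])]
```

    For skewed, long-tailed samples the cross-validated bandwidth often differs appreciably from Silverman's rule, which is derived under a Gaussian reference and tends to over-smooth such data.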

  8. Electromigration Mechanism of Failure in Flip-Chip Solder Joints Based on Discrete Void Formation.

    PubMed

    Chang, Yuan-Wei; Cheng, Yin; Helfen, Lukas; Xu, Feng; Tian, Tian; Scheel, Mario; Di Michiel, Marco; Chen, Chih; Tu, King-Ning; Baumbach, Tilo

    2017-12-20

    In this investigation, SnAgCu and SN100C solders were electromigration (EM) tested, and the 3D laminography imaging technique was employed for in-situ observation of the microstructure evolution during testing. We found that discrete voids nucleate, grow and coalesce along the intermetallic compound/solder interface during EM testing. A systematic analysis yields quantitative information on the number, volume, and growth rate of voids, and the EM parameter DZ*. We observe that fast intrinsic diffusion in SnAgCu solder causes void growth and coalescence, while in the SN100C solder this coalescence was not significant. To deduce the current density distribution, finite-element models were constructed on the basis of the laminography images. The discrete voids do not change the global current density distribution, but they induce local current crowding around the voids: this local current crowding enhances the lateral void growth and coalescence. The correlation between the current density and the probability of void formation indicates that a threshold current density exists for the activation of void formation. There is a significant increase in the probability of void formation when the current density exceeds half of the maximum value.

  9. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on the two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For the new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organs at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer type, with an average relative point-wise difference of about 5%, within the clinically acceptable range. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and can evaluate the quality and consistency of the treatment planning.
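    The joint-density-then-marginalize idea can be sketched with a 2D Gaussian KDE. Everything below (the distance-dose model, grids, and numbers) is a synthetic illustration, not the study's implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical training data: pooled (distance-to-target, dose) pairs
# from prior plans, with dose falling off with distance plus noise.
rng = np.random.default_rng(1)
dist = rng.uniform(0.0, 5.0, 1000)
dose = 60.0 * np.exp(-0.5 * dist) + rng.normal(0.0, 2.0, 1000)

joint = gaussian_kde(np.vstack([dist, dose]))     # 2D KDE of p(x, d)

# New patient: marginalize the conditional p(d | x) over the patient's
# own distance samples, then cumulate to get a DVH-like curve.
new_dist = rng.uniform(1.0, 3.0, 50)
dose_axis = np.linspace(0.0, 70.0, 141)

pdf = np.zeros_like(dose_axis)
for x in new_dist:
    pts = np.vstack([np.full_like(dose_axis, x), dose_axis])
    p_joint = joint(pts)
    pdf += p_joint / p_joint.sum()                # normalize -> p(d | x)
pdf /= len(new_dist)

dvh = pdf[::-1].cumsum()[::-1] / pdf.sum()        # fraction receiving >= dose
```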

  10. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
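    A toy illustration of the maximum entropy side of this correspondence (not the paper's software): with mean and second-moment constraints the maxent density is p(x) ∝ exp(l1*x + l2*x^2), i.e. a Gaussian, and the Lagrange multipliers can be found on a grid by minimizing the convex dual log Z(l) - l·targets:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic sample; targets are its first two raw moments.
rng = np.random.default_rng(2)
sample = rng.normal(1.0, 0.5, 2000)
x = np.linspace(-2.0, 4.0, 601)
dx = x[1] - x[0]
targets = np.array([sample.mean(), (sample**2).mean()])

def dual(l):
    # Convex dual of the maxent problem: log Z(l) - l . targets.
    expo = np.clip(l[0] * x + l[1] * x**2, -700.0, 700.0)
    return np.log(np.exp(expo).sum() * dx) - l @ targets

res = minimize(dual, x0=np.array([0.0, -0.5]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 2000})
l1, l2 = res.x
p = np.exp(l1 * x + l2 * x**2)
p /= p.sum() * dx
mean_hat = (x * p).sum() * dx                      # matches sample mean
var_hat = (x**2 * p).sum() * dx - mean_hat**2      # matches sample variance
```

    At the dual optimum the gradient condition forces the grid moments of p to equal the targets, which is exactly the moment-matching property of maximum entropy estimates.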

  11. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  12. Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Errington, Jeffrey R.

    2003-06-01

    An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
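    A toy version of the transition-matrix idea (a hypothetical 1D macrostate ladder, not the paper's grand canonical implementation): record the Metropolis acceptance probability of every attempted move, whatever its outcome, then rebuild the stationary distribution from the estimated transition matrix via detailed balance:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20
E = 0.02 * (np.arange(N + 1) - 10.0) ** 2   # toy free-energy profile
C = np.zeros((N + 1, N + 1))                # attempted-transition statistics

state = 10
for _ in range(200_000):
    prop = state + rng.choice([-1, 1])
    if 0 <= prop <= N:
        a = min(1.0, float(np.exp(E[state] - E[prop])))  # Metropolis acceptance
        C[state, prop] += a                 # record every attempt...
        C[state, state] += 1.0 - a
        if rng.random() < a:                # ...independently of the outcome
            state = prop
    else:
        C[state, state] += 1.0

# Stationary distribution from detailed balance on the estimated T.
T = C / C.sum(axis=1, keepdims=True)
up = T[np.arange(N), np.arange(1, N + 1)]
dn = T[np.arange(1, N + 1), np.arange(N)]
lnpi = np.concatenate(([0.0], np.cumsum(np.log(up / dn))))
pi = np.exp(lnpi - lnpi.max())
pi /= pi.sum()                              # should approach exp(-E)/Z
```

    Because every attempt contributes to C regardless of whether the move is taken, the estimate converges much faster than a visited-states histogram, which is the efficiency advantage described in the abstract.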

  13. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
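    The "ecological distance" ingredient can be sketched as a least-cost path on a resistance grid, here via Dijkstra's algorithm on a hypothetical landscape with a single low-cost pass (all values are illustrative, not from the paper):

```python
import heapq
import numpy as np

# Hypothetical resistance surface: a costly ridge with one low-cost pass.
resist = np.ones((20, 20))
resist[:, 10] = 25.0
resist[15, 10] = 1.0

def least_cost(grid, src, dst):
    """Dijkstra over 4-neighbour moves; step cost = mean cell resistance."""
    rows, cols = grid.shape
    dist = np.full(grid.shape, np.inf)
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == dst:
            return d
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (grid[r, c] + grid[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist[dst]

d_eco = least_cost(resist, (5, 2), (5, 18))   # detours through the pass
d_euc = 16.0                                  # straight-line cell distance
```

    An SCR encounter model would then use the ecological distance in place of the Euclidean one inside the detection kernel, e.g. exp(-d_eco**2 / (2 * sigma**2)), so that a ridge between a trap and an activity center lowers the encounter probability.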

  14. Spatial capture–recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  15. Study of sea-surface slope distribution and its effect on radar backscatter based on Global Precipitation Measurement Ku-band precipitation radar measurements

    NASA Astrophysics Data System (ADS)

    Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin

    2018-01-01

    The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of the wind speed. Then, the performance of the modified Liu distribution (the Liu distribution with the obtained peakedness-coefficient estimate), the Gaussian distribution, and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution show larger differences because the total or the slick-filtered, rather than the radar-filtered, probability density is included in the distribution. The best-performing distribution varies with incidence angle and with wind speed.
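    For reference, the quasi-specular GO prediction with an isotropic Gaussian slope PDF takes the closed form sigma0(theta) = |R|^2 sec^4(theta) / mss * exp(-tan^2(theta) / mss). The sketch below evaluates it over the KuPR incidence range; the reflectivity and the Cox-Munk-type mean-square slope are illustrative assumptions, not values from the study:

```python
import numpy as np

R2 = 0.61                      # assumed effective Ku-band |R(0)|^2
U = 8.0                        # wind speed, m/s
mss = 0.003 + 0.00512 * U      # Cox-Munk-type total mean-square slope

theta = np.radians(np.arange(0, 19))            # KuPR range: 0..18 deg
sigma0 = R2 / (mss * np.cos(theta)**4) * np.exp(-np.tan(theta)**2 / mss)
sigma0_db = 10.0 * np.log10(sigma0)             # ~11 dB at nadir here
```

    Swapping in a Gram-Charlier or Liu slope PDF changes only the probability density factor, which is why the low-incidence fall-off of sigma0 discriminates between the candidate distributions.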

  16. Estimating large carnivore populations at global scale based on spatial predictions of density and distribution – Application to the jaguar (Panthera onca)

    PubMed Central

    Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard

    2018-01-01

    Broad-scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities (< 1 person/km2) coinciding with high primary productivity in the core area of jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129
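    Per range cell, the hierarchical combination reduces to density × probability of occurrence × area, summed over cells. A deliberately toy numeric sketch (all cell values are synthetic, not outputs of the jaguar models):

```python
import numpy as np

# Synthetic per-cell inputs: density from a density model, occurrence
# probability from a distribution model, and a fixed cell area.
rng = np.random.default_rng(4)
n_cells = 1000
area_km2 = np.full(n_cells, 100.0)               # 10 km x 10 km cells
density = rng.gamma(2.0, 1.0, n_cells) / 100.0   # animals per km^2
p_occ = rng.beta(2.0, 5.0, n_cells)              # probability of occurrence

# Expected abundance summed over the modelled range.
population = float(np.sum(density * p_occ * area_km2))
```

    In practice the cell-level uncertainties would be propagated (e.g. by posterior sampling) to obtain a confidence interval on the total, as the abstract's 95% CI illustrates.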

  17. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.

  18. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  19. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  20. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  1. Exact joint density-current probability function for the asymmetric exclusion process.

    PubMed

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society

  2. Raney Distributions and Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Liu, Dang-Zheng

    2015-03-01

    Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
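    The moment relations are easy to check numerically: the Fuss-Catalan numbers FC_p(n) = binom(pn, n) / ((p-1)n + 1) reduce at p = 2 to the Catalan numbers, which are the moments of the Marchenko-Pastur density, the limiting spectral density of the squared singular values of a square standard Gaussian matrix. A quick sanity check of this standard fact (not the paper's computation):

```python
import numpy as np
from math import comb

def fuss_catalan(p, n):
    # FC_p(n) = binom(pn, n) / ((p-1)n + 1); always an integer.
    return comb(p * n, n) // ((p - 1) * n + 1)

# Marchenko-Pastur density (ratio 1) on (0, 4]: rho(x) = sqrt(4/x - 1)/(2*pi).
x = np.linspace(1e-6, 4.0, 400_000)
dx = x[1] - x[0]
rho = np.sqrt(4.0 / x - 1.0) / (2.0 * np.pi)

moments = [(x**n * rho).sum() * dx for n in range(1, 5)]
catalan = [fuss_catalan(2, n) for n in range(1, 5)]   # 1, 2, 5, 14
```

    The same check with p > 2 would require the Fuss-Catalan densities whose parameterized closed forms are the subject of this paper.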

  3. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
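    A minimal sketch of the core idea (not the authors' algorithm): matching the first N raw moments of p(x) = sum_k c_k x^k on [a, b] is a linear system in the coefficients. Beta(2, 4) makes a convenient test because its density 20x(1-x)^3 is itself a quartic, so the degree-4 fit recovers it exactly; sample moments would plug into the same system:

```python
import numpy as np

a, b, N = 0.0, 1.0, 4
# Exact raw moments E[x^m], m = 0..4, of Beta(2, 4); the 0th constraint
# enforces normalization.
mu = np.array([1.0, 1/3, 1/7, 1/14, 5/126])

# Constraint matrix: A[m, k] = integral_a^b x^(m+k) dx.
M, K = np.meshgrid(np.arange(N + 1), np.arange(N + 1), indexing="ij")
A = (b**(M + K + 1.0) - a**(M + K + 1.0)) / (M + K + 1)
c = np.linalg.solve(A, mu)
# Beta(2, 4) = 20x(1-x)^3, so c should be [0, 20, -60, 60, -20].
```

    One caveat worth noting: on [0, 1] the matrix A is a Hilbert matrix, so the system becomes ill-conditioned as N grows, which limits how many noisy sample moments can usefully be matched.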

  4. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  5. Statistical analysis and modeling of intermittent transport events in the tokamak scrape-off layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Johan, E-mail: anderson.johan@gmail.com; Halpern, Federico D.; Ricci, Paolo

    The turbulence observed in the scrape-off-layer of a tokamak is often characterized by intermittent events of bursty nature, a feature which raises concerns about the prediction of heat loads on the physical boundaries of the device. It appears thus necessary to delve into the statistical properties of turbulent physical fields such as density, electrostatic potential, and temperature, focusing on the mathematical expression of tails of the probability distribution functions. The method followed here is to generate statistical information from time-traces of the plasma density stemming from Braginskii-type fluid simulations and check this against a first-principles theoretical model. The analysis of the numerical simulations indicates that the probability distribution function of the intermittent process contains strong exponential tails, as predicted by the analytical theory.
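    A common way to check for exponential tails of the kind predicted here: on a log scale, the empirical survival function P(X > x) of exponentially distributed burst amplitudes is a straight line whose slope gives the decay constant (synthetic data below, not simulation output):

```python
import numpy as np

# Synthetic burst amplitudes standing in for time-trace statistics.
rng = np.random.default_rng(6)
x = rng.exponential(scale=2.0, size=20_000)

xs = np.sort(x)
surv = 1.0 - np.arange(1, xs.size + 1) / xs.size  # empirical P(X > x)
# Fit the tail between the 90th and 99.9th percentiles on a log scale.
tail = (xs > np.quantile(xs, 0.90)) & (xs < np.quantile(xs, 0.999))
slope, _ = np.polyfit(xs[tail], np.log(surv[tail]), 1)
lam = -1.0 / slope                                # decay constant, ~2 here
```

    Deviations from linearity in this plot (e.g. upward curvature) would instead point to heavier-than-exponential tails.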

  6. Spacing distribution functions for 1D point island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto

    2011-03-01

    We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).

  7. Mean, covariance, and effective dimension of stochastic distributed delay dynamics

    NASA Astrophysics Data System (ADS)

    René, Alexandre; Longtin, André

    2017-11-01

    Dynamical models are often required to incorporate both delays and noise. However, the inherently infinite-dimensional nature of delay equations makes formal solutions to stochastic delay differential equations (SDDEs) challenging. Here, we present an approach, similar in spirit to the analysis of functional differential equations, but based on finite-dimensional matrix operators. This results in a method for obtaining both transient and stationary solutions that is directly amenable to computation, and applicable to first order differential systems with either discrete or distributed delays. With fewer assumptions on the system's parameters than other current solution methods and no need to be near a bifurcation, we decompose the solution to a linear SDDE with arbitrary distributed delays into natural modes, in effect the eigenfunctions of the differential operator, and show that relatively few modes can suffice to approximate the probability density of solutions. Thus, we are led to conclude that noise makes these SDDEs effectively low dimensional, which opens the possibility of practical definitions of probability densities over their solution space.

  8. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

    The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetical forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also show that it is far less time consuming than other methods such as Monte Carlo simulations.

  9. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method requires only two assumptions: the copy number of every molecular species may be treated as continuous, and the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  10. Probability density function learning by unsupervised neurons.

    PubMed

    Fiori, S

    2001-10-01

    In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.

  11. Comparative analysis through probability distributions of a data set

    NASA Astrophysics Data System (ADS)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in fields as diverse as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, and demography. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. A number of statistical methods are available to help select the best-fitting model. Some of the graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution and compare that distance to some threshold values. Calculating the goodness-of-fit statistics also enables us to rank the fitted distributions according to how well they fit the data, which is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large data set is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions, and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
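
    A minimal sketch of the workflow described here (fit a candidate distribution, then score the fit), assuming SciPy and a synthetic gamma-distributed data set: the Kolmogorov-Smirnov statistic is computed directly, and a chi-squared statistic is formed from decile bins. Note the KS p-value is optimistic when parameters are estimated from the same data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=1000)   # synthetic data set

# Fit a candidate distribution, then score the fit two ways.
shape, loc, scale = stats.gamma.fit(data, floc=0)
ks = stats.kstest(data, "gamma", args=(shape, loc, scale))

# Chi-squared: compare observed vs expected counts in decile bins.
edges = np.quantile(data, np.linspace(0, 1, 11))
observed, _ = np.histogram(data, bins=edges)
cdf = stats.gamma.cdf(edges, shape, loc=loc, scale=scale)
expected = len(data) * np.diff(cdf)
chi2 = float(np.sum((observed - expected) ** 2 / expected))
```

    Ranking several candidate families by these statistics, as the article proposes, is then a loop over distribution objects.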

  12. Electron Trap Energy Distribution in ALD Al2O3, LaAl4Ox, and GdyAl2-yO3 Layers on Silicon

    NASA Astrophysics Data System (ADS)

    Wang, W. C.; Badylevich, M.; Adelmann, C.; Swerts, J.; Kittl, J. A.; Afanas'ev, V. V.

    2012-12-01

    The energy distribution of electron trap density in atomic-layer-deposited Al2O3, LaAl4Ox, and GdyAl2-yO3 insulating layers was studied using exhaustive photodepopulation spectroscopy. Upon filling the traps by electron tunneling from the Si substrate, a broad energy distribution of trap levels in the range 2-4 eV is found in all studied insulators, with trap densities on the order of 10^12 cm^-2 eV^-1. The incorporation of La and Gd cations reduces the trap density in aluminate layers as compared to Al2O3. Crystallization of the insulator by post-deposition annealing is found to increase the trap density while the energy distribution remains unchanged. The similar trap spectra in the Al2O3 and La or Gd aluminate layers suggest a common nature of the traps, probably originating from imperfections in the AlOx sub-network.

  13. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  14. Three statistical models for estimating length of stay.

    PubMed Central

    Selvin, S

    1977-01-01

    The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program. PMID:914532

  16. Does probability of occurrence relate to population dynamics?

    USGS Publications Warehouse

    Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.

    2014-01-01

    Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (P_occ). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with P_occ, while N, and for most regions K, was generally positively correlated with P_occ. Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.

  17. Sandpile-based model for capturing magnitude distributions and spatiotemporal clustering and separation in regional earthquakes

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.

    2017-04-01

    We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability of targeting the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not observed previously for the original sandpile. For this critical range of probability values, the model statistics compare remarkably well with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes while introducing only a single additional parameter into the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
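
    A toy version of such a targeted-triggering sandpile can be sketched as follows; the grid size, step count, and toppling rule (standard BTW with open boundaries) are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def targeted_sandpile(n=20, steps=5000, p_target=0.005, seed=1):
    """Toy BTW-style sandpile: with probability p_target the next grain is
    dropped on the most susceptible (fullest) site, otherwise on a uniformly
    random site.  Returns the avalanche sizes (topplings per grain)."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(steps):
        if rng.random() < p_target:
            i, j = np.unravel_index(np.argmax(z), z.shape)  # targeted drop
        else:
            i, j = rng.integers(0, n, size=2)               # random drop
        z[i, j] += 1
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if len(unstable) == 0:
                break
            for i, j in unstable:
                z[i, j] -= 4
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n:
                        z[ni, nj] += 1  # grains fall off open boundaries
        sizes.append(size)
    return np.array(sizes)

sizes = targeted_sandpile()
```

    The avalanche-size histogram of `sizes` (after discarding the transient) is the quantity whose heavy tail is compared against the GR law.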

  18. The Havriliak-Negami relaxation and its relatives: the response, relaxation and probability density functions

    NASA Astrophysics Data System (ADS)

    Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.

    2018-04-01

    We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^(-β), with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their dual domain in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing the reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions f_{α,β}(t/τ0) go to the one-sided Lévy stable distributions as q tends to one. Moreover, applying the self-similarity property of the probability densities g_{α,β}(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
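
    The frequency-domain pattern itself is straightforward to evaluate numerically; a small sketch (with α, β, and the frequency grid chosen arbitrarily) is:

```python
import numpy as np

def havriliak_negami(omega, tau0, alpha, beta):
    """Complex Havriliak-Negami pattern 1 / (1 + (i*omega*tau0)**alpha)**beta."""
    return 1.0 / (1.0 + (1j * omega * tau0) ** alpha) ** beta

omega = np.logspace(-3, 3, 601)          # frequency grid (arbitrary)
chi = havriliak_negami(omega, tau0=1.0, alpha=0.5, beta=2.0)
```

    For α = β = 1 the expression reduces to the Debye relaxation 1/(1 + iωτ0).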

  19. Distribution, density, and biomass of introduced small mammals in the southern mariana islands

    USGS Publications Warehouse

    Wiewel, A.S.; Adams, A.A.Y.; Rodda, G.H.

    2009-01-01

    Although it is generally accepted that introduced small mammals have detrimental effects on island ecology, our understanding of these effects is frequently limited by incomplete knowledge of small mammal distribution, density, and biomass. Such information is especially critical in the Mariana Islands, where small mammal density is inversely related to effectiveness of Brown Tree Snake (Boiga irregularis) control tools, such as mouse-attractant traps. We used mark-recapture sampling to determine introduced small mammal distribution, density, and biomass in the major habitats of Guam, Rota, Saipan, and Tinian, including grassland, Leucaena forest, and native limestone forest. Of the five species captured, Rattus diardii (sensu Robins et al. 2007) was most common across habitats and islands. In contrast, Mus musculus was rarely captured at forested sites, Suncus murinus was not captured on Rota, and R. exulans and R. norvegicus captures were uncommon. Modeling indicated that neophobia, island, sex, reproductive status, and rain amount influenced R. diardii capture probability, whereas time, island, and capture heterogeneity influenced S. murinus and M. musculus capture probability. Density and biomass were much greater on Rota, Saipan, and Tinian than on Guam, most likely a result of Brown Tree Snake predation pressure on the latter island. Rattus diardii and M. musculus density and biomass were greatest in grassland, whereas S. murinus density and biomass were greatest in Leucaena forest. The high densities documented during this research suggest that introduced small mammals (especially R. diardii) are impacting abundance and diversity of the native fauna and flora of the Mariana Islands. Further, Brown Tree Snake control and management tools that rely on mouse attractants will be less effective on Rota, Saipan, and Tinian than on Guam. 
If the Brown Tree Snake becomes established on these islands, high-density introduced small mammal populations will likely facilitate and support a high-density Brown Tree Snake population, even as native species are reduced or extirpated. © 2009 by University of Hawai'i Press. All rights reserved.

  20. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America.
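
    The single-sensor chain (passive sonar equation → SNR → detector curve → detection probability) can be sketched in a few lines; every distribution and constant below is an illustrative placeholder, not the values used for Blainville's beaked whales.

```python
import numpy as np

rng = np.random.default_rng(42)

def detection_probability(r_m, n=10000):
    """Monte Carlo estimate of the probability of detecting a click emitted
    at range r_m (metres) from a single hydrophone.  Every distribution and
    constant here is an illustrative placeholder."""
    SL = rng.normal(170.0, 10.0, n)            # source level, dB
    TL = 20.0 * np.log10(r_m) + 1e-3 * r_m     # spherical spreading + absorption
    NL = rng.normal(70.0, 5.0, n)              # noise level, dB
    snr = SL - TL - NL                         # passive sonar equation
    # Detector characterization: logistic curve mapping SNR to P(detection).
    p_det = 1.0 / (1.0 + np.exp(-(snr - 10.0) / 2.0))
    return float(p_det.mean())

p_near = detection_probability(1000.0)     # 1 km
p_far = detection_probability(50000.0)     # 50 km
```

    Averaging such estimates over range, combined with call rate and false positive rate, yields the density estimator.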

  1. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  2. Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins.

    PubMed

    Nguyen, Bao Linh; Pettitt, B Montgomery

    2015-04-14

    The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a distance from the closest or surface perpendicular protein atom. We consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, a solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types from both small models and protein averages reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective and less computationally demanding than reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations.

  3. Improving effectiveness of systematic conservation planning with density data.

    PubMed

    Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant

    2015-08-01

    Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.

  4. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    PubMed

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematic model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate its application to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated by the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data analytical methods for them, and by analysis of the chromatographic fingerprints of various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract. The established model consists of five main parameters: (1) total quantum statistical moment similarity S_T, the area of overlap between the two normal distribution probability density curves obtained by conversion of the two sets of TQSM parameters; (2) total variability D_T, a confidence limit of the standard normal cumulative probability equal to the absolute difference between the two normal cumulative probabilities integrated between the intersection points of their curves; (3) total variable probability 1 - S_s, the standard normal distribution probability within the interval D_T; (4) total variable probability (1-β)_α; and (5) stable confident probability β(1-α), the correct probability for making positive and negative conclusions under confidence coefficient α. With the model, we analyzed the TQSMSS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data analytical methods for them, which were in the range 0.3852-0.9875, illuminating their different pharmacokinetic behaviors; the TQSMSS similarities (S_T) of the chromatographic fingerprints of various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract were in the range 0.6842-0.9992, showing that the constituents vary with the extraction solvent. The TQSMSS can characterize sample similarity, allowing us to quantify, with a power test, the correct probability of making positive and negative conclusions under confidence coefficient α, whether or not the samples come from the same population, and to analyze at both macroscopic and microscopic levels, as an important similarity-analysis method for medical theoretical research.
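
    The S_T-style similarity, the overlapped area of two normal probability density curves, is easy to compute numerically; a minimal sketch (with SciPy, and arbitrary example parameters) is:

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def overlap_similarity(mu1, sd1, mu2, sd2):
    """Overlapped area of two normal probability density curves
    (an S_T-style similarity: 1 for identical curves, 0 for disjoint)."""
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    x = np.linspace(lo, hi, 20001)
    f1 = stats.norm.pdf(x, mu1, sd1)
    f2 = stats.norm.pdf(x, mu2, sd2)
    return float(trapezoid(np.minimum(f1, f2), x))

s_same = overlap_similarity(0.0, 1.0, 0.0, 1.0)
s_shift = overlap_similarity(0.0, 1.0, 2.0, 1.0)
```

    For two unit normals two standard deviations apart the overlap is 2Φ(-1) ≈ 0.317.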

  5. Echo Statistics of Aggregations of Scatterers in a Random Waveguide: Application to Biologic Sonar Clutter

    DTIC Science & Technology

    2012-09-01

    used in this paper to compare probability density functions: the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness ... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for ... Lilliefors test for goodness of fit with an exponential distribution. These results suggest that,
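
    A discretized Kullback-Leibler distance between a sample and a candidate density can be sketched as below; this bin-based construction is one simple choice, not necessarily the report's, and the sample and reference densities are illustrative.

```python
import numpy as np

def kl_distance(sample, ref_pdf, bins=50):
    """Discretized Kullback-Leibler distance between an empirical sample
    and a reference density (a simple bin-based construction)."""
    hist, edges = np.histogram(sample, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = np.diff(edges)
    p = hist * width                  # empirical bin probabilities
    q = ref_pdf(centers) * width      # reference bin probabilities
    mask = (p > 0) & (q > 0)          # skip empty bins
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(5)
x = rng.exponential(2.0, 5000)
d_true = kl_distance(x, lambda t: np.exp(-t / 2.0) / 2.0)   # matching model
d_wrong = kl_distance(x, lambda t: np.exp(-t))              # mean-1 exponential
```

    The distance is near zero for the matching model and clearly larger for the mismatched one, which is the comparison the test exploits.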

  6. Use of the Weibull function to predict future diameter distributions from current plot data

    Treesearch

    Quang V. Cao

    2012-01-01

    The Weibull function has been widely used to characterize diameter distributions in forest stands. The future diameter distribution of a forest stand can be predicted by use of a Weibull probability density function from current inventory data for that stand. The parameter recovery approach has been used to “recover” the Weibull parameters from diameter moments or...
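
    The parameter-recovery step can be illustrated with a method-of-moments version: given a stand's mean and variance of diameters, the Weibull shape k solves a one-dimensional equation in the squared coefficient of variation, after which the scale follows. The numbers below are made up for the example.

```python
from scipy.optimize import brentq
from scipy.special import gamma

def recover_weibull(mean, var):
    """Recover the Weibull shape k and scale lam from the first two
    moments (a moment-based stand-in for parameter recovery)."""
    cv2 = var / mean**2
    def f(k):
        g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
        return g2 / g1**2 - 1 - cv2   # zero when k matches the CV
    k = brentq(f, 0.1, 50.0)
    lam = mean / gamma(1 + 1 / k)
    return k, lam

k, lam = recover_weibull(mean=25.0, var=60.0)   # e.g. diameters in cm
```

    Projecting the moments forward in time and re-recovering (k, lam) gives the predicted future diameter distribution.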

  7. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

    The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89% of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
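
    The method-of-moments step for a Pearson Type 1 (beta-family) curve reduces, for data rescaled to [0, 1], to two closed-form expressions; the sketch below checks them on synthetic beta draws (the shape values are arbitrary).

```python
import numpy as np

def beta_moment_fit(sample):
    """Method-of-moments fit of a beta distribution on [0, 1],
    i.e. the Pearson Type 1 family up to a location-scale shift."""
    m, v = sample.mean(), sample.var()
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common   # shape parameters (alpha, beta)

rng = np.random.default_rng(3)
sample = rng.beta(2.0, 5.0, size=20000)
a, b = beta_moment_fit(sample)
```

    Elderton's criteria then decide, from the skewness and kurtosis, whether Type 1 is the appropriate Pearson family at all.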

  8. Shallow slip amplification and enhanced tsunami hazard unravelled by dynamic simulations of mega-thrust earthquakes

    PubMed Central

    Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.

    2016-01-01

    The 2011 Tohoku earthquake produced an unexpected large amount of shallow slip greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies with depth, with earthquake size, and with whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically-derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically-derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733

  9. Statistics of velocity gradients in two-dimensional Navier-Stokes and ocean turbulence.

    PubMed

    Schorghofer, Norbert; Gille, Sarah T

    2002-02-01

    Probability density functions and conditional averages of velocity gradients derived from upper ocean observations are compared with results from forced simulations of the two-dimensional Navier-Stokes equations. Ocean data are derived from TOPEX satellite altimeter measurements. The simulations use rapid forcing on large scales, characteristic of surface winds. The probability distributions of transverse velocity derivatives from the ocean observations agree with the forced simulations, although they differ from unforced simulations reported elsewhere. The distribution and cross correlation of velocity derivatives provide clear evidence that large coherent eddies play only a minor role in generating the observed statistics.

  10. Derived distribution of floods based on the concept of partial area coverage with a climatic appeal

    NASA Astrophysics Data System (ADS)

    Iacobellis, Vito; Fiorentino, Mauro

    2000-02-01

    A new rationale for deriving the probability distribution of floods, and for helping to understand the physical processes underlying the distribution itself, is presented. On this basis a model that introduces a number of new assumptions is developed. The basic ideas are as follows: (1) The peak direct streamflow Q can always be expressed as the product of two random variates, namely, the average runoff per unit area u_a and the peak contributing area a; (2) the distribution of u_a conditional on a can be related to that of the rainfall depth occurring in a duration equal to a characteristic response time τ_a of the contributing part of the basin; and (3) τ_a is assumed to vary with a according to a power law. Consequently, the probability density function of Q can be found as the integral, over the total basin area A, of the density function of a times the density function of u_a given a. It is suggested that u_a can be expressed as a fraction of the excess rainfall and that the annual flood distribution can be related to that of Q by the hypothesis that the flood occurrence process is Poissonian. In the proposed model it is assumed, as an exploratory attempt, that a and u_a are gamma and Weibull distributed, respectively. The model was applied to the annual flood series of eight gauged basins in Basilicata (southern Italy) with catchment areas ranging from 40 to 1600 km2. The results showed strong physical consistency, as the parameters tended to assume values in good agreement with well-consolidated geomorphologic knowledge, and suggested a new key to understanding the climatic control of the probability distribution of floods.
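
    The product structure Q = u_a · a makes the derived distribution easy to explore by Monte Carlo; the sketch below uses the same distributional families (gamma a, Weibull u_a) but with invented parameter values and a toy power-law coupling between u_a and a.

```python
import numpy as np

rng = np.random.default_rng(7)

# a: peak contributing area (gamma, censored at the basin area A);
# u: average runoff per unit area given a, Weibull with a toy power-law
# coupling to a.  All parameter values are invented for illustration.
A = 1600.0                                        # basin area, km^2
n = 100_000
a = np.minimum(rng.gamma(2.0, 200.0, n), A)       # peak contributing area
u = 5.0 * rng.weibull(0.8, n) * (a / A) ** -0.25  # u_a given a
Q = u * a                                         # peak direct streamflow
```

    A histogram of `Q` approximates the derived flood density; thinning the events with a Poisson occurrence process then yields the annual-maximum distribution.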

  11. Fish community changes in the St. Louis River estuary, Lake Superior, 1989-1996: Is it ruffe or population dynamics?

    USGS Publications Warehouse

    Bronte, Charles R.; Evrard, Lori M.; Brown, William P.; Mayo, Kathleen R.; Edwards, Andrew J.

    1998-01-01

    Ruffe (Gymnocephalus cernuus) have been implicated in density declines of native species through egg predation and competition for food in some European waters where they were introduced. Density estimates for ruffe and principal native fishes in the St. Louis River estuary (western Lake Superior) were developed for 1989 to 1996 to measure changes in the fish community in response to an unintentional introduction of ruffe. During the study, ruffe density increased and the densities of several native species decreased. The reductions of native stocks were compared to the natural population dynamics of the same species in Chequamegon Bay, Lake Superior (an area with very few ruffe), where there was a 24-year record of density. Using these data, short- and long-term variations in catch and correlations among species within years were compared, and species-specific distributions were developed of observed trends in abundance of native fishes in Chequamegon Bay, indexed by the slopes of densities across years. From these distributions and our observed trend-line slopes from the St. Louis River, probabilities of measuring negative change of the magnitude observed in the St. Louis River were estimated. Compared with trends in Chequamegon Bay, there was a high probability of obtaining the negative slopes measured for most species, which suggests that natural population dynamics, rather than interactions with ruffe, could explain the declines. Variable recruitment, which was not related to ruffe density, and associated density-dependent changes in mortality likely were responsible for density declines of native species.

  12. Global Distribution of Density Irregularities in the Equatorial Ionosphere

    NASA Technical Reports Server (NTRS)

    Kil, Hyosub; Heelis, R. A.

    1998-01-01

    We analyzed measurements of ion number density made by the retarding potential analyzer aboard the Atmosphere Explorer-E (AE-E) satellite, which was in an approximately circular orbit at an altitude near 300 km in 1977 and later at an altitude near 400 km. Large-scale (greater than 60 km) density measurements in the high-altitude regions show large depletions with bubble-like structures that are confined to narrow local time, longitude, and magnetic latitude ranges, while those in the low-altitude regions show relatively small depletions that are broadly distributed in space. For this reason we considered the altitude regions below 300 km and above 350 km and investigated the global distribution of irregularities using the rms deviation delta N/N over a path length of 18 km as an indicator of overall irregularity intensity. Seasonal variations of irregularity occurrence probability are significant in the Pacific regions, while the occurrence probability is always high in the Atlantic-African regions and is always low in the Indian regions. We find that the high occurrence probability in the Pacific regions is associated with isolated bubble structures, while that near 0 deg longitude is produced by large depletions with bubble structures superimposed on a large-scale wave-like background. Considerations of longitude variations due to seeding mechanisms and due to F region winds and drifts are necessary to adequately explain the observations at low and high altitudes. Seeding effects are most obvious near 0 deg longitude, while the most easily observed effect of the F region is the suppression of irregularity growth by interhemispheric neutral winds.

  13. A Seakeeping Performance and Affordability Tradeoff Study for the Coast Guard Offshore Patrol Cutter

    DTIC Science & Technology

    2016-06-01

    Seakeeping index polar plot for Sea State 4 (all headings are relative to the wave motion; velocity is given in meters per second). Probability and cumulative density functions of annual sea state occurrences in the open ocean, North Pacific. ... criteria at a given sea state. Probability distribution functions are available that describe the likelihood that an operational area will experience a given sea state.

  14. Non-Fickian dispersion of groundwater age

    PubMed Central

    Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.

    2014-01-01

    We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651

  15. IGM CONSTRAINTS FROM THE SDSS-III/BOSS DR9 Lyα FOREST TRANSMISSION PROBABILITY DISTRIBUTION FUNCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Khee-Gan; Hennawi, Joseph F.; Spergel, David N.

    2015-02-01

    The Lyα forest transmission probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the transmission PDF from 3393 Baryon Oscillation Spectroscopic Survey (BOSS) quasars from Sloan Digital Sky Survey Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS transmission PDFs, measured at ⟨z⟩ = [2.3, 2.6, 3.0], are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, γ, and the temperature at mean density, T_0, where T(Δ) = T_0 Δ^(γ−1). We find that a significant population of partial Lyman-limit systems (LLSs) with a column-density distribution slope of β_pLLS ∼ −2 is required to explain the data at the low-transmission end of the transmission PDF, while uncertainties in the mean Lyα forest transmission affect the high-transmission end. After modeling the LLSs and marginalizing over mean-transmission uncertainties, we find that γ = 1.6 best describes the data over our entire redshift range, although constraints on T_0 are affected by systematic uncertainties. Within our model framework, isothermal or inverted temperature-density relationships (γ ≤ 1) are disfavored at a significance of over 4σ, although this could be somewhat weakened by cosmological and astrophysical uncertainties that we did not model.

  16. Characterization of Cloud Water-Content Distribution

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2010-01-01

    The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
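The maximum likelihood estimation step can be illustrated with a minimal sketch. A lognormal form is assumed here purely for illustration (the abstract does not state which parametric family the tool fits), and the synthetic "water content" values are made up so the recovered parameters can be checked against known truth:

```python
import math
import random

def lognormal_mle(samples):
    """Closed-form maximum likelihood estimates (mu, sigma) of a
    lognormal fit: the MLE is the mean and standard deviation of
    the log-transformed sample."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    return mu, sigma

random.seed(0)
# Synthetic data drawn from a known lognormal(mu=0.5, sigma=0.8).
data = [random.lognormvariate(0.5, 0.8) for _ in range(20000)]
mu_hat, sigma_hat = lognormal_mle(data)
print(mu_hat, sigma_hat)  # approximately 0.5 and 0.8
```

For families without closed-form MLEs, the same log-likelihood would be maximized numerically, but the principle is identical.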

  17. Landscape-scale distribution and density of raptor populations wintering in anthropogenic-dominated desert landscapes

    USGS Publications Warehouse

    Duerr, Adam E.; Miller, Tricia A.; Cornell Duerr, Kerri L; Lanzone, Michael J.; Fesnock, Amy; Katzner, Todd E.

    2015-01-01

    Anthropogenic development has great potential to affect fragile desert environments. Large-scale development of renewable energy infrastructure is planned for many desert ecosystems. Development plans should account for anthropogenic effects on the distributions and abundance of rare or sensitive wildlife; however, baseline data on the abundance and distribution of such wildlife are often lacking. We surveyed for predatory birds in the Sonoran and Mojave Deserts of southern California, USA, in an area designated for protection under the “Desert Renewable Energy Conservation Plan”, to determine how these birds are distributed across the landscape and how this distribution is affected by existing development. First, we developed species-specific models of resight probability to adjust estimates of abundance and density for each common species. Second, we developed combined-species models of resight probability for common and rare species so that we could make use of sparse data on the latter. We determined that many common species, such as red-tailed hawks, loggerhead shrikes, and especially common ravens, are associated with human development and likely subsidized by human activity. Species-specific and combined-species models of resight probability performed similarly, although the former model type provided higher-quality information. Comparing abundance estimates with past surveys in the Mojave Desert suggests that numbers of predatory birds associated with human development have increased while other sensitive species not associated with development have decreased. This approach gave us information beyond what we would have collected by focusing only on common or only on rare species, and thus provides a low-cost framework for others conducting surveys in similar desert environments outside of California.

  18. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
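The detection-curve idea can be sketched with a toy logistic regression. Everything here is illustrative: the true curve logistic(−2 + 0.6·density), the survey counts, and the single density covariate are invented stand-ins for the study's sampling-intensity and density predictors:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.15, steps=2500):
    """Full-batch gradient ascent on the logistic log-likelihood:
    detection outcome (0/1) modeled as a function of target density."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

rng = random.Random(1)
# Simulated surveys: detection probability rises with target density.
densities = [rng.uniform(0.0, 10.0) for _ in range(800)]
hits = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(-2.0 + 0.6 * d))) else 0
        for d in densities]
b0, b1 = fit_logistic(densities, hits)
detect = lambda d: 1.0 / (1.0 + math.exp(-(b0 + b1 * d)))
print(detect(1.0), detect(8.0))  # low at low density, high at high density
```

The fitted curve makes the false-negative rate explicit: 1 − detect(d) is the probability of missing a population present at density d.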

  19. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment fields' probability density functions. This comprehensive analysis also employs a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs using the maximum likelihood estimation (MLE) technique, which had not previously been applied to atmospheric data. Using this technique, we show that higher-order moments can be estimated accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the fitted distributions across the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing unique scale-dependent qualities under various stability and flow conditions. This novel approach provides a method of characterizing increment fields with only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for use in future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including wind energy and optical wave propagation.

  20. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF), and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture averaging effect is discussed as well.

  1. Probability density of aperture-averaged irradiance fluctuations for long range free space optical communication links.

    PubMed

    Lyke, Stephen D; Voelz, David G; Roggemann, Michael C

    2009-11-20

    The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser beam after propagation through atmospheric turbulence, to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius ρ0 and lognormal for aperture sizes on the order of ρ0 and larger. Examples of how these results affect the bit error rate of an on-off keyed free-space optical communication link are presented.
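Because a gamma-gamma irradiance variate is the product of two independent unit-mean gamma variates, its scintillation index can be checked by direct simulation. The shape parameters below (α = 4, β = 2) are arbitrary illustrative values, not taken from the paper:

```python
import random

def gamma_gamma_samples(alpha, beta, n, rng):
    """Draw gamma-gamma irradiance samples as the product of two
    independent gamma variates, each normalized to unit mean."""
    return [rng.gammavariate(alpha, 1.0 / alpha) * rng.gammavariate(beta, 1.0 / beta)
            for _ in range(n)]

def scintillation_index(xs):
    """Normalized irradiance variance: sigma_I^2 = <I^2>/<I>^2 - 1."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs) / m ** 2

rng = random.Random(42)
samples = gamma_gamma_samples(4.0, 2.0, 50000, rng)
si = scintillation_index(samples)
# Theory: 1/alpha + 1/beta + 1/(alpha*beta) = 0.25 + 0.5 + 0.125 = 0.875
print(si)
```

Aperture averaging in the paper effectively increases α and β, driving the scintillation index down, which is why the best-fitting model family changes with aperture size.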

  2. IN VITRO QUANTIFICATION OF THE SIZE DISTRIBUTION OF INTRASACCULAR VOIDS LEFT AFTER ENDOVASCULAR COILING OF CEREBRAL ANEURYSMS.

    PubMed

    Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B

    2013-03-01

    Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs that effect higher packing densities may be far from optimal because the hemodynamic forces causing compaction are not well understood, since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned, and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31 ± 2%) were lower (p = 0.04) than those of the stent-assist group (40 ± 7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group than in the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed, with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted in the void sizes, but correlation coefficients were generally low (absolute r ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and can be used to compare and optimize coil configurations as well as coiling techniques.

  3. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  4. Multivariate η-μ fading distribution with arbitrary correlation model

    NASA Astrophysics Data System (ADS)

    Ghareeb, Ibrahim; Atiani, Amani

    2018-03-01

    An extensive analysis of the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function, and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. This paper also provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow, frequency-nonselective, arbitrarily correlated, not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with post-detection diversity reception over arbitrarily correlated η-μ fading channels with not necessarily identical fading parameters is determined using the MGF-based approach. The effects of fading correlation between diversity branches, fading severity parameters, and diversity level are studied.

  5. A product Pearson-type VII density distribution

    NASA Astrophysics Data System (ADS)

    Nadarajah, Saralees; Kotz, Samuel

    2008-01-01

    The Pearson-type VII distributions (containing the Student's t distributions) are becoming increasingly prominent and are being considered as competitors to the normal distribution. Motivated by real examples in decision sciences, Bayesian statistics, probability theory, and physics, a new Pearson-type VII distribution is introduced by taking the product of two Pearson-type VII pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates, and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.

  6. Average fidelity between random quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyczkowski, Karol; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Aleja Lotnikow 32/44, 02-668 Warsaw; Perimeter Institute, Waterloo, Ontario, N2L 2Y5

    2005-03-01

    We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.

  7. Technical Reports Prepared Under Contract N00014-76-C-0475.

    DTIC Science & Technology

    1987-05-29

    264 Approximations to Densities in Geometric Probability — H. Solomon, M. A. Stephens, 10/27/78
    265 Sequential ... Certain Multivariate Normal Probabilities — S. Iyengar, 8/12/82
    323 EDF Statistics for Testing for the Gamma Distribution with ... — M. A. Stephens, 8/13/82
    360 Random Sequential Coding by Hamming Distance — Yoshiaki Itoh, Herbert Solomon, 07-11-85
    361 Transforming Censored Samples and Testing Fit

  8. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.

  9. Analytical tools and isolation of TOF events

    NASA Technical Reports Server (NTRS)

    Wolf, H.

    1974-01-01

    Analytical tools are presented in two reports. The first is a probability analysis of the orbital distribution of events in relation to the dust flux density distributions observed by Pioneer 8 and 9. A distinction is drawn between asymmetries caused by random fluctuations and systematic variations by calculating the probability of any particular asymmetry. The second article discusses particle trajectories in a repulsive force field. The force on a particle due to solar radiation pressure is directed along the particle's radius vector from the sun and is inversely proportional to its distance from the sun. Equations of motion which describe both solar radiation pressure and gravitational attraction are presented.

  10. On the continuity of the stationary state distribution of DPCM

    NASA Astrophysics Data System (ADS)

    Naraghi-Pour, Morteza; Neuhoff, David L.

    1990-03-01

    Continuity and singularity properties of the stationary state distribution of differential pulse code modulation (DPCM) are explored. Two-level DPCM (i.e., delta modulation) operating on a first-order autoregressive source is considered, and it is shown that, when the magnitude of the DPCM prediction coefficient is between zero and one-half, the stationary state distribution is singularly continuous; i.e., it is not discrete but concentrates on an uncountable set with a Lebesgue measure of zero. Consequently, it cannot be represented with a probability density function. For prediction coefficients with magnitude greater than or equal to one-half, the distribution is pure, i.e., either absolutely continuous and representable with a density function, or singular. This problem is compared to the well-known and still substantially unsolved problem of symmetric Bernoulli convolutions.
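A short simulation makes the a < 1/2 case concrete: iterating a delta-modulation-style state recursion x_{n+1} = a·x_n ± 1 (with a hypothetical unit step size and symmetric binary innovations) shows the states avoid a central gap of half-width 1 − a/(1 − a), so the support already fails to fill an interval. This is only a first hint of the singular structure, not a proof of it:

```python
import random

# State recursion for two-level DPCM with prediction coefficient a < 1/2.
a = 0.4
rng = random.Random(3)
x = 0.0
states = []
for n in range(20000):
    x = a * x + rng.choice((-1.0, 1.0))
    if n > 50:            # discard the initial transient
        states.append(x)

# Any reachable state satisfies |x| >= 1 - a*|x_prev| >= 1 - a/(1-a),
# since |x| never exceeds 1/(1-a); for a = 0.4 the gap half-width is 1/3.
gap = 1.0 - a / (1.0 - a)
print(min(abs(s) for s in states), gap)
```

For a ≥ 1/2 the gap closes (1 − a/(1 − a) ≤ 0), consistent with the dichotomy described in the abstract.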

  11. Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins

    PubMed Central

    Nguyen, Bao Linh; Pettitt, B. Montgomery

    2015-01-01

    The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a distance from the closest or surface perpendicular protein atom. We consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, a solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types from both small models and protein averages reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective and less computationally demanding than reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations. PMID:26388706

  12. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
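The transformation of uniform random numbers into a prescribed non-Gaussian distribution can be sketched via the inverse-CDF method; the Weibull shape and scale values below are arbitrary choices for illustration, not parameters from the study:

```python
import math
import random

def weibull_inverse_cdf(u, shape, scale):
    """Map a uniform(0,1) variate u to a Weibull draw via the
    inverse CDF: F^{-1}(u) = scale * (-ln(1 - u))**(1/shape)."""
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

rng = random.Random(7)
draws = [weibull_inverse_cdf(rng.random(), 2.0, 1.0) for _ in range(100000)]
mean = sum(draws) / len(draws)
# Weibull(shape=2, scale=1) has mean Gamma(1.5) = sqrt(pi)/2, about 0.886
print(mean)
```

The same recipe yields exponential, log-normal, or other target laws by swapping in the appropriate inverse CDF, after which peak statistics of the resulting load history can be tabulated as in the study.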

  13. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    NASA Astrophysics Data System (ADS)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly to minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e., when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over- or under-massive regions.

  14. A fully traits-based approach to modeling global vegetation distribution.

    PubMed

    van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M

    2014-09-23

    Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.

  15. Self-calibration of photometric redshift scatter in weak-lensing surveys

    DOE PAGES

    Zhang, Pengjie; Pen, Ue-Li; Bernstein, Gary

    2010-06-11

    Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose relies on neither cosmological priors nor a parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurements. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. Galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.

  16. Topology of two-dimensional turbulent flows of dust and gas

    NASA Astrophysics Data System (ADS)

    Mitra, Dhrubaditya; Perlekar, Prasad

    2018-04-01

    We perform direct numerical simulations (DNS) of passive heavy inertial particles (dust) in homogeneous and isotropic two-dimensional turbulent flows (gas) for a range of Stokes numbers, St < 1. We solve for the particles using both a Lagrangian and an Eulerian approach (with a shock-capturing scheme). In the latter, the particles are described by a dust-density field and a dust-velocity field. We find the following: the dust-density field in our Eulerian simulations has the same correlation dimension d2 as obtained from the clustering of particles in the Lagrangian simulations for St < 1; the cumulative probability distribution function of the dust density coarse grained over a scale r, in the inertial range, has a left tail with a power-law falloff indicating the presence of voids; the energy spectrum of the dust velocity has a power-law range with an exponent that is the same as the gas-velocity spectrum except at very high Fourier modes; the compressibility of the dust-velocity field is proportional to St^2. We quantify the topological properties of the dust velocity and the gas velocity through their gradient matrices, called A and B, respectively. Our DNS confirms that the statistics of topological properties of B are the same in Eulerian and Lagrangian frames only if the Eulerian data are weighted by the dust density. We use this correspondence to study the statistics of topological properties of A in the Lagrangian frame from our Eulerian simulations by calculating density-weighted probability distribution functions. We further find that in the Lagrangian frame, the mean value of the trace of A is negative and its magnitude increases with St approximately as exp(-C/St) with a constant C ≈ 0.1. The statistical distribution of the different topological structures that appear in the dust flow differs between the Eulerian and Lagrangian (density-weighted Eulerian) cases, particularly for St close to unity.
In both of these cases, for small St the topological structures have close to zero divergence and are either vortical (elliptic) or strain dominated (hyperbolic, saddle). As St increases, the contribution to negative divergence comes mostly from saddles, and the contribution to positive divergence comes from both vortices and saddles. Compared to the Eulerian case, the Lagrangian (density-weighted Eulerian) case has fewer outward spirals and more converging saddles. Inward spirals are the least probable topological structures in both cases.
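The elliptic/hyperbolic classification used in this record can be illustrated by sorting 2×2 gradient matrices by their invariants; the random matrices below are synthetic stand-ins for A or B, not DNS output.

```python
import numpy as np

rng = np.random.default_rng(1)

def classify(A):
    """Sort a 2x2 gradient matrix by its invariants: the trace is the local
    divergence, and the discriminant tr^2 - 4 det decides whether the
    eigenvalues are complex (vortical/elliptic) or real (strain/hyperbolic)."""
    tr, det = np.trace(A), np.linalg.det(A)
    kind = "vortical" if tr**2 - 4 * det < 0 else "strain"
    if abs(tr) < 1e-12:
        motion = "neutral"
    elif tr < 0:
        motion = "converging"
    else:
        motion = "diverging"
    return kind, motion

# Tabulate topology statistics for random synthetic gradient matrices.
counts = {}
for _ in range(10_000):
    kind, motion = classify(rng.normal(size=(2, 2)))
    counts[(kind, motion)] = counts.get((kind, motion), 0) + 1
for key, c in sorted(counts.items()):
    print(key, c / 10_000)
```

Density-weighting, as in the paper, would amount to weighting each matrix by the local dust density when accumulating the counts.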

  17. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
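A minimal sketch of the maximum-Pc idea for a two-interval forced-choice task with arbitrary discrete distributions (this is not the authors' MATLAB code, and the pmfs are invented): the maximum-likelihood observer chooses the interval with the larger likelihood ratio, with ties split at random.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete sensory-activity distributions on a common support (assumed pmfs).
support = np.arange(0, 21)
p_noise = np.exp(-0.5 * ((support - 8) / 3.0) ** 2)
p_noise /= p_noise.sum()
p_signal = np.exp(-0.5 * ((support - 11) / 3.0) ** 2)
p_signal /= p_signal.sum()

# ML observer in 2I-2AFC: pick the interval with the larger likelihood ratio.
lr = p_signal / p_noise
n = 200_000
x = rng.choice(support, size=n, p=p_signal)  # signal interval
y = rng.choice(support, size=n, p=p_noise)   # noise interval
pc_mc = np.mean((lr[x] > lr[y]) + 0.5 * (lr[x] == lr[y]))

# Exact maximum Pc directly from the pmfs, for comparison.
wins = (lr[:, None] > lr[None, :]).astype(float) + 0.5 * (lr[:, None] == lr[None, :])
pc_exact = p_signal @ wins @ p_noise
print(f"Monte Carlo Pc = {pc_mc:.4f}, exact Pc = {pc_exact:.4f}")
```

As the abstract notes, a continuous density can be handled the same way once discretized at sufficiently high sampling resolution.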

  18. Anurans in a Subarctic Tundra Landscape Near Cape Churchill, Manitoba

    USGS Publications Warehouse

    Reiter, M.E.; Boal, C.W.; Andersen, D.E.

    2008-01-01

    Distribution, abundance, and habitat relationships of anurans inhabiting subarctic regions are poorly understood, and anuran monitoring protocols developed for temperate regions may not be applicable across large roadless areas of northern landscapes. In addition, arctic and subarctic regions of North America are predicted to experience changes in climate and, in some areas, are experiencing habitat alteration due to high rates of herbivory by breeding and migrating waterfowl. To better understand subarctic anuran abundance, distribution, and habitat associations, we conducted anuran calling surveys in the Cape Churchill region of Wapusk National Park, Manitoba, Canada, in 2004 and 2005. We conducted surveys along ~1-km transects distributed across three landscape types (coastal tundra, interior sedge meadow-tundra, and boreal forest-tundra interface) to estimate densities and probabilities of detection of Boreal Chorus Frogs (Pseudacris maculata) and Wood Frogs (Lithobates sylvaticus). We detected a Wood Frog or Boreal Chorus Frog on 22 (87%) of 26 transects surveyed, but probability of detection varied between years and species and among landscape types. Estimated densities of both species increased from the coastal zone inland toward the boreal forest edge. Our results suggest anurans occur across all three landscape types in our study area, but that species-specific spatial patterns exist in their abundances. Considerations for both spatial and temporal variation in abundance and detection probability need to be incorporated into surveys and monitoring programs for subarctic anurans.

  19. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…

  20. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
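A rough sketch of the model-selection step, on synthetic data and using direct likelihood maximization rather than whatever fitting procedure the paper used; all parameters are illustrative.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)

# Synthetic "aggregated wind power" sample from a two-component Weibull
# mixture; the parameters are invented, not from the paper's datasets.
x = np.concatenate([
    stats.weibull_min(1.8, scale=2.0).rvs(600, random_state=rng),
    stats.weibull_min(2.5, scale=9.0).rvs(400, random_state=rng),
])
n = len(x)

def nll_mix(params):
    """Negative log-likelihood of a 2-component Weibull mixture."""
    w, c1, s1, c2, s2 = params
    pdf = (w * stats.weibull_min.pdf(x, c1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(x, c2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

# One-component MLE fit for reference.
c, _, s = stats.weibull_min.fit(x, floc=0)
nll1 = -np.sum(stats.weibull_min.logpdf(x, c, scale=s))

# Two-component fit by direct likelihood maximization (EM is the usual
# alternative); the starting point is deliberately crude.
res = optimize.minimize(nll_mix, x0=[0.5, 1.5, 2.5, 2.0, 8.0],
                        bounds=[(0.05, 0.95), (0.2, 10), (0.1, 30),
                                (0.2, 10), (0.1, 30)])
nll2 = res.fun

# Model selection: AIC = 2k + 2*NLL, BIC = k*ln(n) + 2*NLL.
for k, nll in ((2, nll1), (5, nll2)):
    print(f"k = {k}: AIC = {2*k + 2*nll:8.1f}, BIC = {k*np.log(n) + 2*nll:8.1f}")
```

For genuinely bimodal data like this, the mixture's lower AIC/BIC mirrors the paper's finding that multi-Weibull models suit aggregated wind power better.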

  1. Process, System, Causality, and Quantum Mechanics: A Psychoanalysis of Animal Faith

    NASA Astrophysics Data System (ADS)

    Etter, Tom; Noyes, H. Pierre

    We shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution J on a set of random variables containing x and y, define a link between x and y to be the condition x=y on J. Define the state D of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule and the unitary dynamical law, whose best known form is Schrödinger's equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PD) and D'T = TD respectively, where P is a projection, D and D' are (von Neumann) density matrices, and T is a unitary transformation. We'll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all D's are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ and how they can coexist in natural harmony with each other, as they must in quantum measurement, which we'll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.
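The two von Neumann forms quoted above are easy to check numerically; the qubit state below is purely illustrative.

```python
import numpy as np

# Projection P onto |0> and a mixed qubit state D (70% |+>, 30% |1>);
# the state is invented for illustration.
P = np.outer([1.0, 0.0], [1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
D = 0.7 * np.outer(plus, plus) + 0.3 * np.outer([0.0, 1.0], [0.0, 1.0])

# Born rule in von Neumann's form: prob(P) = trace(P D).
p = float(np.trace(P @ D))
print("prob(P) =", p)  # 0.7 * |<0|+>|^2 + 0.3 * 0 = 0.35

# Unitary dynamics D' = T D T^{-1} preserves trace(D) = 1 (total probability).
theta = 0.4
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print("trace after evolution:", round(float(np.trace(T @ D @ T.T)), 12))
```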

  2. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
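Steps (i) and (ii) of the mapping procedure can be sketched as a weighted linear combination of Gaussian kernel density estimates; the data sets, coordinates, and weights below are hypothetical stand-ins for the study's elicited inputs.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Hypothetical vent-location data sets (x, y in km), each with a weight of
# the kind that would come from the structured expert judgment.
datasets = {
    "past_vents":      (rng.normal([0.0, 0.0], 0.8, size=(40, 2)), 0.5),
    "fractures":       (rng.normal([1.5, 0.0], 1.2, size=(60, 2)), 0.3),
    "deep_seismicity": (rng.normal([-1.0, 1.0], 1.5, size=(30, 2)), 0.2),
}

# (i) one Gaussian-kernel spatial density per data set ...
kdes = {k: (gaussian_kde(pts.T), w) for k, (pts, w) in datasets.items()}

# (ii) ... combined linearly with the elicited weights.
def vent_density(xy):
    return sum(w * kde(xy) for kde, w in kdes.values())

grid = np.mgrid[-4:4:80j, -4:4:80j].reshape(2, -1)
z = vent_density(grid)
cell = (8 / 79) ** 2  # grid-cell area, for a rough normalization check
print("integral over the map ≈", round(float(z.sum() * cell), 3))
```

The doubly stochastic treatment in the paper would, on top of this, randomize the weights themselves according to the elicited uncertainty.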

  3. Mode switching in volcanic seismicity: El Hierro 2011-2013

    NASA Astrophysics Data System (ADS)

    Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.

    2016-05-01

    The Gutenberg-Richter b value is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. Such studies typically analyze discrete time windows or phases, but the choice of such windows is subjective and can introduce significant bias. Here we minimize this sample bias by iteratively sampling catalogs with randomly chosen windows and then stacking the resulting probability density functions for the estimated b value to determine a net probability density function. We examine data from the El Hierro seismic catalog during a period of unrest in 2011-2013 and demonstrate clear multimodal behavior. Individual modes are relatively stable in time, but the most probable b value intermittently switches between modes, one of which is similar to that of tectonic seismicity. Multimodality is primarily associated with the intermittent activation and cessation of activity in different parts of the volcanic system rather than with any systematic change in an inferred underlying process.
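The window-stacking idea can be sketched on a synthetic catalog with a deliberate b-value switch, using the standard Aki maximum-likelihood estimator (the real analysis uses the El Hierro catalog, not simulated magnitudes).

```python
import numpy as np

rng = np.random.default_rng(5)
Mc = 2.0  # assumed catalog completeness magnitude

def sample_mags(n, b):
    """Gutenberg-Richter magnitudes above Mc for a given b value."""
    return Mc + rng.exponential(scale=1.0 / (b * np.log(10)), size=n)

# Synthetic catalog with a mode switch: b = 1.0, then b = 2.0.
mags = np.concatenate([sample_mags(3000, 1.0), sample_mags(3000, 2.0)])

def b_value(m):
    """Aki maximum-likelihood b estimator for one window."""
    return 1.0 / (np.log(10) * (m.mean() - Mc))

# Randomly chosen windows; stacking their estimates reveals multimodality.
estimates = []
for _ in range(2000):
    w = rng.integers(200, 400)           # random window length
    i = rng.integers(0, len(mags) - w)   # random window start
    estimates.append(b_value(mags[i:i + w]))

hist, edges = np.histogram(estimates, bins=60, density=True)
print("highest stacked-density bins near b =",
      np.round(edges[np.argsort(hist)[-2:]], 2))
```

Windows straddling the switch yield intermediate estimates, so the stacked density shows peaks near the two regime values plus a bridge between them.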

  4. Exploiting vibrational resonance in weak-signal detection

    NASA Astrophysics Data System (ADS)

    Ren, Yuhao; Pan, Yan; Duan, Fabing; Chapeau-Blondeau, François; Abbott, Derek

    2017-08-01

    In this paper, we investigate the first exploitation of the vibrational resonance (VR) effect to detect weak signals in the presence of strong background noise. By injecting a series of sinusoidal interference signals of the same amplitude but with different frequencies into a generalized correlation detector, we show that the detection probability can be maximized at an appropriate interference amplitude. Based on a dual-Dirac probability density model, we compare the VR method with the stochastic resonance approach via adding dichotomous noise. The compared results indicate that the VR method can achieve a higher detection probability for a wider variety of noise distributions.

  5. Exploiting vibrational resonance in weak-signal detection.

    PubMed

    Ren, Yuhao; Pan, Yan; Duan, Fabing; Chapeau-Blondeau, François; Abbott, Derek

    2017-08-01

    In this paper, we investigate the first exploitation of the vibrational resonance (VR) effect to detect weak signals in the presence of strong background noise. By injecting a series of sinusoidal interference signals of the same amplitude but with different frequencies into a generalized correlation detector, we show that the detection probability can be maximized at an appropriate interference amplitude. Based on a dual-Dirac probability density model, we compare the VR method with the stochastic resonance approach via adding dichotomous noise. The compared results indicate that the VR method can achieve a higher detection probability for a wider variety of noise distributions.

  6. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probabilities of the normal-noise spikes with the Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from the delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is the most pronounced for a low detection threshold. Similarity of the behaviors of the distributions of the inter-discharge intervals in a thundercloud and the noise spikes for the varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of noise, is observed. The results of this work can be useful for the quantitative description of the statistical characteristics of the noise spikes and studying the role of fluctuations for the discharge emergence in a thundercloud.

  7. The structure and statistics of interstellar turbulence

    NASA Astrophysics Data System (ADS)

    Kritsuk, A. G.; Ustyugov, S. D.; Norman, M. L.

    2017-06-01

    We explore the structure and statistics of multiphase, magnetized ISM turbulence in the local Milky Way by means of driven periodic box numerical MHD simulations. Using the higher-order-accurate piecewise-parabolic method on a local stencil (PPML), we carry out a small parameter survey varying the mean magnetic field strength and density while fixing the rms velocity to observed values. We quantify numerous characteristics of the transient and steady-state turbulence, including its thermodynamics and phase structure, kinetic and magnetic energy power spectra, structure functions, and distribution functions of density, column density, pressure, and magnetic field strength. The simulations reproduce many observables of the local ISM, including molecular clouds, such as the ratio of turbulent to mean magnetic field at 100 pc scale, the mass and volume fractions of thermally stable H I, the lognormal distribution of column densities, the mass-weighted distribution of thermal pressure, and the linewidth-size relationship for molecular clouds. Our models predict the shape of magnetic field probability density functions (PDFs), which are strongly non-Gaussian, and the relative alignment of magnetic field and density structures. Finally, our models show how the observed low rates of star formation per free-fall time are controlled by the multiphase thermodynamics and large-scale turbulence.

  8. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. 
A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
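A minimal sketch of two of the estimation routes the tool offers, KDE and parametric MLE, on synthetic inverse-gamma areas (not a real inventory); the shape and scale parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic landslide areas (m^2) drawn from an inverse-gamma model.
area = stats.invgamma(a=1.4, scale=800.0).rvs(500, random_state=rng)

# (ii) Kernel Density Estimation, done in log space where the heavy-tailed
# data are better behaved; change of variables gives f(A) = g(log A) / A.
kde = stats.gaussian_kde(np.log(area))

# (iii) Maximum Likelihood Estimation of the inverse-gamma parameters.
a_hat, loc_hat, scale_hat = stats.invgamma.fit(area, floc=0)
print(f"MLE: shape a = {a_hat:.2f}, scale = {scale_hat:.0f}")

# Compare the two density estimates at a few areas.
for A in (200.0, 800.0, 5000.0):
    f_kde = kde(np.log(A))[0] / A
    f_mle = stats.invgamma.pdf(A, a_hat, scale=scale_hat)
    print(f"A = {A:6.0f} m^2: KDE = {f_kde:.2e}, inverse-gamma = {f_mle:.2e}")
```

The rollover the abstract mentions corresponds to the mode of such a density when plotted against area on log axes.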

  9. Binary data corruption due to a Brownian agent

    NASA Astrophysics Data System (ADS)

    Newman, T. J.; Triampo, Wannapong

    1999-05-01

    We introduce a model of binary data corruption induced by a Brownian agent (active random walker) on a d-dimensional lattice. A continuum formulation allows the exact calculation of several quantities related to the density of corrupted bits ρ, for example, the mean of ρ and the density-density correlation function. Excellent agreement is found with the results from numerical simulations. We also calculate the probability distribution of ρ in d=1, which is found to be log normal, indicating that the system is governed by extreme fluctuations.
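A quick simulation in the spirit of the model, assuming "corrupted" means a bit flipped an odd number of times by the walker; the lattice size and step count are arbitrary choices, not the paper's.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(7)

def corrupted_density(L=200, steps=5000):
    """One realisation: a random walker on a periodic 1-d lattice of L bits
    flips every bit it lands on; a bit counts as corrupted if it has been
    flipped an odd number of times. Returns the final density rho."""
    moves = rng.choice((-1, 1), size=steps - 1)
    pos = (L // 2 + np.concatenate(([0], np.cumsum(moves)))) % L
    flips = np.bincount(pos, minlength=L)
    return np.mean(flips % 2 == 1)

rho = np.array([corrupted_density() for _ in range(300)])
print(f"mean rho = {rho.mean():.3f}, std = {rho.std():.3f}")
# The paper reports log-normal statistics for rho in d = 1; a strongly
# positive skew of rho is a quick qualitative sign of such broad fluctuations.
print("skew(rho) =", round(float(skew(rho)), 2))
```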

  10. Comparing neuronal spike trains with inhomogeneous Poisson distribution: evaluation procedure and experimental application in cases of cyclic activity.

    PubMed

    Fiore, Lorenzo; Lorenzetti, Walter; Ratti, Giovannino

    2005-11-30

    A procedure is proposed to compare single-unit spiking activity elicited in repetitive cycles with an inhomogeneous Poisson process (IPP). Each spike sequence in a cycle is discretized and represented as a point process on a circle. The interspike interval probability density predicted for an IPP is computed on the basis of the experimental firing probability density; differences from the experimental interval distribution are assessed. This procedure was applied to spike trains which were repetitively induced by opening-closing movements of the distal article of a lobster leg. As expected, the density of short interspike intervals, less than 20-40 ms in length, was found to lie greatly below the level predicted for an IPP, reflecting the occurrence of the refractory period. Conversely, longer intervals, ranging from 20-40 to 100-120 ms, were markedly more abundant than expected; this provided evidence for a time window of increased tendency to fire again after a spike. Less consistently, a weak depression of spike generation was observed for longer intervals. A Monte Carlo procedure, implemented for comparison, produced quite similar results, but was slightly less precise and more demanding in terms of computation time.
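Generating the reference IPP itself is straightforward by thinning; the cyclic rate profile below is an assumed stand-in for the experimentally estimated firing probability density.

```python
import numpy as np

rng = np.random.default_rng(8)

T = 1.0  # cycle period (s); the rate profile is an assumed stand-in
rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t / T)  # spikes/s

def ipp_spikes(n_cycles, rmax=35.0):
    """Inhomogeneous Poisson spike times by thinning: propose candidate
    events at the ceiling rate rmax, keep each with probability rate/rmax."""
    t, out = 0.0, []
    while t < n_cycles * T:
        t += rng.exponential(1.0 / rmax)
        if rng.uniform() < rate(t) / rmax:
            out.append(t)
    return np.asarray(out)

spikes = ipp_spikes(2000)
isi = np.diff(spikes)

# An IPP has no refractory period, so very short intervals are abundant;
# a recorded neuron's ISI histogram should fall well below this at short lags.
short = float(np.mean(isi < 0.02))
print(f"{len(spikes)} spikes; fraction of ISIs < 20 ms: {short:.3f}")
```

Comparing this predicted ISI density against the recorded one is the essence of the procedure; the refractory-period deficit shows up as a shortfall of short intervals.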

  11. How likely are constituent quanta to initiate inflation?

    DOE PAGES

    Berezhiani, Lasha; Trodden, Mark

    2015-08-06

    In this study, we propose an intuitive framework for studying the problem of initial conditions in slow-roll inflation. In particular, we consider a universe at high, but sub-Planckian energy density and analyze the circumstances under which it is plausible for it to become dominated by inflated patches at late times, without appealing to the idea of self-reproduction. Our approach is based on defining a prior probability distribution for the constituent quanta of the pre-inflationary universe. To test the idea that inflation can begin under very generic circumstances, we make specific, yet quite general and well grounded, assumptions on the prior distribution. As a result, we are led to the conclusion that the probability for a given region to ignite inflation at sub-Planckian densities is extremely small. Furthermore, if one chooses to use the enormous volume factor that inflation yields as an appropriate measure, we find that the regions of the universe which started inflating at densities below the self-reproductive threshold nevertheless occupy a negligible physical volume in the present universe as compared to those domains that have never inflated.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Versino, Daniele; Bronkhorst, Curt Allan

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.

  13. Mean Field Approach to the Giant Wormhole Problem

    NASA Astrophysics Data System (ADS)

    Gamba, A.; Kolokolov, I.; Martellini, M.

    We introduce a Gaussian probability density for the space-time distribution of wormholes, thus taking wormhole interaction into account effectively. Using a mean-field approximation for the free energy, we show that giant wormholes are probabilistically suppressed in a homogeneous, isotropic “large” universe.

  14. A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers

    PubMed Central

    Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin

    2018-01-01

    In multi-target tracking, process and measurement noises corrupted by outliers can severely reduce the performance of the probability hypothesis density (PHD) filter. To solve this problem, this paper proposes a novel PHD filter, called the Student’s t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process noise and measurement noise as Student’s t distributions and approximates the multi-target intensity as a mixture of Student’s t components to be propagated in time. A closed-form PHD recursion is then obtained based on the Student’s t approximation. Our approach can make full use of the heavy-tailed character of the Student’s t distribution to handle situations with heavy-tailed process and measurement noises. The simulation results verify that the proposed filter can overcome the negative effects generated by outliers and maintain good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348

  15. The emergence of different tail exponents in the distributions of firm size variables

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki

    2013-05-01

    We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (logK,logL,logY), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.

  16. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
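
    The Monte Carlo approach underlying such an algorithm can be sketched directly; the per-axis standard deviations below are arbitrary placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-axis standard deviations of the TCM Delta-v components
# (placeholders, not values from the paper).
sigmas = np.array([1.0, 2.0, 3.0])

# Draw zero-mean Gaussian component vectors and take their magnitudes.
dv = rng.normal(0.0, sigmas, size=(200_000, 3))
mag = np.linalg.norm(dv, axis=1)

# (1) mean and standard deviation of |Delta v|
print(mag.mean(), mag.std())
# (3) points of the inverse cumulative distribution function
print(np.percentile(mag, [50, 90, 99]))
```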

  17. Bayesian predictive power: choice of prior and some recommendations for its use as probability of success in drug development.

    PubMed

    Rufibach, Kaspar; Burger, Hans Ulrich; Abt, Markus

    2016-09-01

    Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not equal, to a β-distribution. Alternative priors are discussed, and practical recommendations to assess the sensitivity of Bayesian predictive power to its input parameters are provided. Copyright © 2016 John Wiley & Sons, Ltd.
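
    Bayesian predictive power as defined above, the prior expectation of the power function, can be computed directly for a one-sided z-test with a normal prior. All numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import stats

# One-sided z-test setting: the effect estimate is approximately
# N(theta, se^2) and the trial succeeds if it clears the critical boundary.
se = 0.5                           # standard error of the effect estimate
z_crit = stats.norm.ppf(0.975)     # one-sided 2.5% significance level
mu0, tau = 0.8, 0.4                # normal prior on the true effect size

def power(theta):
    """Frequentist power of the z-test at true effect theta."""
    return stats.norm.cdf(theta / se - z_crit)

# Bayesian predictive power: expectation of the power function under the prior.
thetas = np.linspace(mu0 - 8 * tau, mu0 + 8 * tau, 20_001)
dtheta = thetas[1] - thetas[0]
bpp = np.sum(power(thetas) * stats.norm.pdf(thetas, mu0, tau)) * dtheta

# Equivalent closed form: the prior-predictive estimate is
# N(mu0, se^2 + tau^2) and must exceed z_crit * se.
bpp_closed = stats.norm.cdf((mu0 - z_crit * se) / np.hypot(se, tau))
print(bpp, bpp_closed)
```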

  18. Chaotic density fluctuations in L-mode plasmas of the DIII-D tokamak

    DOE PAGES

    Maggs, J. E.; Rhodes, Terry L.; Morales, G. J.

    2015-03-05

    Analysis of the time series obtained with the Doppler backscattering system (DBS) in the DIII-D tokamak shows that intermediate wave number plasma density fluctuations in low confinement (L-mode) tokamak plasmas are chaotic. Here, the supporting evidence is based on the shape of the power spectrum; the location of the signal in the complexity-entropy plane (C-H plane); and the population of the corresponding Bandt-Pompe probability distributions.

  19. Relative mass distributions of neutron-rich thermally fissile nuclei within a statistical model

    NASA Astrophysics Data System (ADS)

    Kumar, Bharat; Kannan, M. T. Senthil; Balasubramaniam, M.; Agrawal, B. K.; Patra, S. K.

    2017-09-01

    We study the binary mass distribution for the recently predicted thermally fissile neutron-rich uranium and thorium nuclei using a statistical model. The level density parameters needed for the study are evaluated from the excitation energies of the temperature-dependent relativistic mean field formalism. The excitation energy and the level density parameter for a given temperature are employed in the convolution integral method to obtain the probability of the particular fragmentation. As representative cases, we present the results for the binary yields of 250U and 254Th. The relative yields are presented for three different temperatures: T =1 , 2, and 3 MeV.

  20. A Case Series of the Probability Density and Cumulative Distribution of Laryngeal Disease in a Tertiary Care Voice Center.

    PubMed

    de la Fuente, Jaime; Garrett, C Gaelyn; Ossoff, Robert; Vinson, Kim; Francis, David O; Gelbard, Alexander

    2017-11-01

    To examine the distribution of clinic and operative pathology in a tertiary care laryngology practice. Probability density and cumulative distribution analyses (Pareto analysis) were used to rank-order the laryngeal conditions seen in an outpatient tertiary care laryngology practice and those requiring surgical intervention during a 3-year period. Among 3783 new clinic consultations and 1380 operative procedures, voice disorders were the most common primary diagnostic category seen in clinic (n = 3223), followed by airway (n = 374) and swallowing (n = 186) disorders. Within the voice strata, the most common primary ICD-9 code used was dysphonia (41%), followed by unilateral vocal fold paralysis (UVFP) (9%) and cough (7%). Among new voice patients, 45% were found to have a structural abnormality. The most common surgical indications were laryngotracheal stenosis (37%), followed by recurrent respiratory papillomatosis (18%) and UVFP (17%). Nearly 55% of patients presenting to a tertiary referral laryngology practice did not have an identifiable structural abnormality of the larynx on direct or indirect examination. The distribution of ICD-9 codes requiring surgical intervention was disparate from that seen in clinic. Application of the Pareto principle may improve resource allocation in laryngology, but these initial results require confirmation across multiple institutions.
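
    A Pareto analysis of this kind amounts to rank-ordering category counts and accumulating their shares. A toy sketch with invented diagnosis counts (not the paper's data):

```python
# Invented illustrative counts, not the paper's data.
counts = {"dysphonia": 1321, "UVFP": 290, "cough": 226,
          "stenosis": 180, "papillomatosis": 95, "other": 1111}

total = sum(counts.values())
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
for name, n in ranked:
    share = n / total          # probability density (share of all cases)
    cumulative += share        # cumulative distribution over the ranking
    print(f"{name:15s} {share:6.1%}  cumulative {cumulative:6.1%}")
```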

  1. Universality classes of fluctuation dynamics in hierarchical complex systems

    NASA Astrophysics Data System (ADS)

    Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.

    2017-03-01

    A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.

  2. Using satellite remote sensing to model and map the distribution of Bicknell's thrush (Catharus bicknelli) in the White Mountains of New Hampshire

    NASA Astrophysics Data System (ADS)

    Hale, Stephen Roy

    Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof-of-concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to nearest Fir Sapling cover type. Solving the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. Topographically normalized imagery features, the Normalized Difference Vegetation Index and Band 1 (blue), were used to estimate dominant vegetation height using multiple linear regression, and a Digital Elevation Model was used to estimate elevation. Distance to nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation using Bicknell's Thrush data withheld from model calibration revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60; probabilities from 0.40 to <0.50 and greater than 0.60 significantly underestimated and overestimated presence, respectively. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, which is accompanied by a concomitant decrease in land area. The greater land area of lower-density habitats may account for more total individuals and reproductive output than the smaller land area of higher-density habitat. Efforts to conserve the areas of highest individual density, under the assumption that density reflects habitat quality, could therefore target only a small fraction of the total population.
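
    The logistic-regression step described above can be sketched as follows; the coefficients are hypothetical placeholders whose signs merely mirror the reported directions of effect, not the study's fitted values:

```python
import math

# Hypothetical placeholder coefficients (NOT the study's fitted values);
# their signs mirror the reported directions of effect.
B0, B_HEIGHT, B_ELEV, B_DIST = -2.0, -0.8, 0.004, -0.01

def presence_probability(veg_height_m, elevation_m, dist_fir_sapling_m):
    """Logistic-regression probability of detecting a singing male."""
    logit = (B0
             + B_HEIGHT * veg_height_m
             + B_ELEV * elevation_m
             + B_DIST * dist_fir_sapling_m)
    return 1.0 / (1.0 + math.exp(-logit))

# Probability falls with vegetation height and distance to Fir Sapling
# cover, and rises with elevation.
low = presence_probability(8.0, 900.0, 400.0)
high = presence_probability(2.0, 1400.0, 50.0)
print(low, high)
```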

  3. Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA

    USGS Publications Warehouse

    Yarra, Allyson N.; Magoulick, Daniel D.

    2018-01-01

    Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than in 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.

  4. Ceres and the terrestrial planets impact cratering record

    NASA Astrophysics Data System (ADS)

    Strom, R. G.; Marchi, S.; Malhotra, R.

    2018-03-01

    Dwarf planet Ceres, the largest object in the Main Asteroid Belt, has a surface that exhibits a range of crater densities for a crater diameter range of 5-300 km. In all areas the shape of the craters' size-frequency distribution is very similar to those of the most ancient heavily cratered surfaces on the terrestrial planets. The most heavily cratered terrain on Ceres covers ∼15% of its surface and has a crater density similar to the highest crater density on <1% of the lunar highlands. This region of higher crater density on Ceres probably records the high impact rate at early times and indicates that the other 85% of Ceres was partly resurfaced after the Late Heavy Bombardment (LHB) at ∼4 Ga. The Ceres cratering record strongly indicates that the period of Late Heavy Bombardment originated from an impactor population whose size-frequency distribution resembles that of the Main Belt Asteroids.

  5. Nonlinear GARCH model and 1 / f noise

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Ruseckas, J.

    2015-06-01

    Autoregressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models, and they continue to attract the interest of researchers. In this contribution we consider the well-known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power-law statistics, i.e., the probability density function and power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes under consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power-law statistics. We show that the linear GARCH(1,1) process has a power-law distribution, but its power spectral density is Brownian-noise-like. However, the nonlinear modifications exhibit both a power-law distribution and a power spectral density of the 1/fβ form, including 1/f noise.
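
    The heavier-than-Gaussian tails of the GARCH(1,1) stationary distribution are easy to check by simulation; the parameter values below are arbitrary illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a standard GARCH(1,1) process,
#   sigma_t^2 = a0 + a1 * x_{t-1}^2 + b1 * sigma_{t-1}^2,   x_t = sigma_t * eps_t,
# with Gaussian innovations eps_t; the parameter values are illustrative.
a0, a1, b1 = 0.1, 0.1, 0.85
n = 200_000
x = np.empty(n)
var = a0 / (1.0 - a1 - b1)        # start from the stationary variance
for t in range(n):
    x[t] = np.sqrt(var) * rng.standard_normal()
    var = a0 + a1 * x[t] ** 2 + b1 * var

# Positive excess kurtosis signals the heavier-than-Gaussian tails of the
# stationary distribution of x_t.
m2 = np.mean(x ** 2)
m4 = np.mean(x ** 4)
excess_kurtosis = m4 / m2 ** 2 - 3.0
print(excess_kurtosis)
```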

  6. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm is user-configurable to adjust the number of false alerts. The anomaly detector can be used with a variety of probability density functions, including normal Gaussian distributions and irregular distributions, as well as with functions associated with continuous or discrete variables.
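
    Scoring events by their probability under a fitted density can be sketched in a few lines; the Gaussian model and the numbers are purely illustrative assumptions, not the patented method:

```python
import math

# Fit a baseline density (Gaussian here, purely for illustration; the
# abstract notes that other densities can be used) and score events by
# negative log-probability, so low-probability events get high scores.
baseline = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
mu = sum(baseline) / len(baseline)
var = sum((b - mu) ** 2 for b in baseline) / len(baseline)

def anomaly_score(x):
    log_pdf = -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
    return -log_pdf

print(anomaly_score(10.0))  # typical event: low score
print(anomaly_score(25.0))  # rare event: much higher score
```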

  7. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal, and GG distributions to experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiver aperture diameter is comparable to the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and log-normal models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.

  8. Agricultural pesticide use in California: pesticide prioritization, use densities, and population distributions for a childhood cancer study.

    PubMed Central

    Gunier, R B; Harnly, M E; Reynolds, P; Hertz, A; Von Behren, J

    2001-01-01

    Several studies have suggested an association between childhood cancer and pesticide exposure. California leads the nation in agricultural pesticide use. A mandatory reporting system for all agricultural pesticide use in the state provides information on the active ingredient, amount used, and location. We calculated pesticide use density to quantify agricultural pesticide use in California block groups for a childhood cancer study. Pesticides with similar toxicologic properties (probable carcinogens, possible carcinogens, genotoxic compounds, and developmental or reproductive toxicants) were grouped together for this analysis. To prioritize pesticides, we weighted pesticide use by the carcinogenic and exposure potential of each compound. The top-ranking individual pesticides were propargite, methyl bromide, and trifluralin. We used a geographic information system to calculate pesticide use density in pounds per square mile of total land area for all United States census-block groups in the state. Most block groups (77%) averaged less than 1 pound per square mile of use for 1991-1994 for pesticides classified as probable human carcinogens. However, at the high end of use density (> 90th percentile), there were 493 block groups with more than 569 pounds per square mile. Approximately 170,000 children under 15 years of age were living in these block groups in 1990. The distribution of agricultural pesticide use and number of potentially exposed children suggests that pesticide use density would be of value for a study of childhood cancer. PMID:11689348

  9. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    NASA Astrophysics Data System (ADS)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σl<~1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated.

  10. Microdosimetric Modeling of Biological Effectiveness for Boron Neutron Capture Therapy Considering Intra- and Intercellular Heterogeneity in 10B Distribution.

    PubMed

    Sato, Tatsuhiko; Masunaga, Shin-Ichiro; Kumada, Hiroaki; Hamada, Nobuyuki

    2018-01-17

    We here propose a new model for estimating the biological effectiveness of boron neutron capture therapy (BNCT), considering intra- and intercellular heterogeneity in the 10B distribution. The new model was developed from our previously established stochastic microdosimetric kinetic model, which determines the surviving fraction of cells irradiated with any type of radiation. In the model, the probability density of the absorbed doses at microscopic scales is the fundamental physical index for characterizing the radiation fields. A new computational method was established to determine this probability density for application to BNCT using the Particle and Heavy Ion Transport code System, PHITS. The parameters used in the model were determined from the measured surviving fractions of tumor cells administered two kinds of 10B compounds. The model quantitatively highlighted the indispensable need to consider the synergistic effect and the dose dependence of the biological effectiveness when estimating the therapeutic effect of BNCT. The model can predict the biological effectiveness of newly developed 10B compounds based on their intra- and intercellular distributions, and thus it can play important roles not only in treatment planning but also in drug discovery research for future BNCT.

  11. ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density.

    PubMed

    Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro

    2018-01-01

    The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution because it provides a good fit to multiple empirical data. The complexity of this distribution makes the use of computational tools an essential element, so there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed in Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss the advantages of and differences between the least-squares and maximum-likelihood methods, and quantitatively evaluate the goodness of the obtained fits (a point usually overlooked in most of the literature in the area). The analysis allows one to identify outliers in empirical datasets and to judiciously determine whether there is a need for data trimming and, if so, at which points it should be done.
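
    The ex-Gaussian density itself, a Gaussian convolved with an exponential, is straightforward to implement. A sketch independent of the ExGUtils package, with illustrative parameters:

```python
import numpy as np
from scipy import stats

def exgauss_pdf(x, mu, sigma, tau):
    """Density of the ex-Gaussian: a Normal(mu, sigma) variable plus an
    independent Exponential(tau) variable, the standard reaction-time model."""
    z = (x - mu) / sigma - sigma / tau
    return (1.0 / tau) * np.exp(sigma**2 / (2 * tau**2) - (x - mu) / tau) * stats.norm.cdf(z)

# Sanity checks against simulation, using illustrative reaction-time-scale
# parameters (ms): the ex-Gaussian mean is mu + tau, its variance sigma^2 + tau^2.
rng = np.random.default_rng(7)
mu, sigma, tau = 300.0, 30.0, 100.0
sample = rng.normal(mu, sigma, 100_000) + rng.exponential(tau, 100_000)
print(sample.mean())   # ~ mu + tau = 400
print(sample.var())    # ~ sigma^2 + tau^2 = 10900

# The density integrates to ~1 over a wide grid.
xs = np.linspace(0.0, 2000.0, 200_001)
integral = np.sum(exgauss_pdf(xs, mu, sigma, tau)) * (xs[1] - xs[0])
print(integral)
```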

  12. ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density

    PubMed Central

    Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro

    2018-01-01

    The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution because it provides a good fit to multiple empirical data. The complexity of this distribution makes the use of computational tools an essential element, so there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed in Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss the advantages of and differences between the least-squares and maximum-likelihood methods, and quantitatively evaluate the goodness of the obtained fits (a point usually overlooked in most of the literature in the area). The analysis allows one to identify outliers in empirical datasets and to judiciously determine whether there is a need for data trimming and, if so, at which points it should be done. PMID:29765345

  13. Two-dimensional electron density characterisation of arc interruption phenomenon in current-zero phase

    NASA Astrophysics Data System (ADS)

    Inada, Yuki; Kamiya, Tomoki; Matsuoka, Shigeyasu; Kumada, Akiko; Ikeda, Hisatoshi; Hidaka, Kunihiko

    2018-01-01

    Two-dimensional electron density imaging over free burning SF6 arcs and SF6 gas-blast arcs was conducted at current zero using highly sensitive Shack-Hartmann type laser wavefront sensors in order to experimentally characterise electron density distributions for the success and failure of arc interruption in the thermal reignition phase. The experimental results under an interruption probability of 50% showed that free burning SF6 arcs with axially asymmetric electron density profiles were interrupted with a success rate of 88%. On the other hand, the current interruption of SF6 gas-blast arcs was reproducibly achieved under locally reduced electron densities and the interruption success rate was 100%.

  14. Software Supportability Risk Assessment in OT&E (Operational Test and Evaluation): Literature Review, Current Research Review, and Data Base Assemblage.

    DTIC Science & Technology

    1984-09-28

    variables before simulation of model - Search for reality checks - Express uncertainty as a probability density distribution. ... probability that the software contains errors. This prior is updated as test failure data are accumulated. Only a p of 1 (software known to contain... discussed; both parametric and nonparametric versions are presented. It is shown by the author that the bootstrap underlies the jackknife method and

  15. A mathematical model for evolution and SETI.

    PubMed

    Maccone, Claudio

    2011-12-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor f(l) in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation in which the factor f(l) is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of statistics, which states that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
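
    The CLT argument invoked above — that a product of many independent positive random variables approaches a lognormal — is easy to demonstrate numerically with an arbitrary choice of factor distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Multiply many independent positive random variables with an arbitrary
# (here uniform) distribution: by the CLT applied to the sum of their logs,
# the product approaches a lognormal distribution.
n_factors, n_samples = 200, 20_000
factors = rng.uniform(0.5, 1.5, size=(n_samples, n_factors))
product = factors.prod(axis=1)

# If the product is lognormal, log(product) is Gaussian: its skewness and
# excess kurtosis should both be near zero.
logs = np.log(product)
print(stats.skew(logs), stats.kurtosis(logs))
```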

  16. Modelling Evolution and SETI Mathematically

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2012-05-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation in which the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of statistics, which states that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions constrained between the time axis and the exponential growth curve. Finally, since each lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  17. A Mathematical Model for Evolution and SETI

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-12-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation in which the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of statistics, which states that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  18. An efficient distribution method for nonlinear transport problems in stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, F.; Tchelepi, H.; Meyer, D. W.

    2015-12-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient for exploring possible scenarios and assessing risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, information that is rarely available in other approaches yet crucial, such as the probability of rare events and saturation quantiles (e.g., P10, P50, and P90), can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.
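
    Once a saturation distribution is available as samples, the quantiles and rare-event probabilities mentioned above are one-liners. A sketch on stand-in lognormal draws (purely illustrative, not output of the method described in the abstract):

```python
import numpy as np

rng = np.random.default_rng(11)

# Stand-in "water saturation" samples (clipped lognormal draws, purely
# illustrative; not output of the distribution method in the abstract).
saturation = np.clip(rng.lognormal(mean=-1.2, sigma=0.4, size=100_000), 0.0, 1.0)

# Saturation quantiles of the kind used in risk assessment.
p10, p50, p90 = np.percentile(saturation, [10, 50, 90])
print(p10, p50, p90)

# Probability of a rare event, e.g. saturation exceeding 0.8.
rare_prob = float(np.mean(saturation > 0.8))
print(rare_prob)
```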

  19. STAR FORMATION IN TURBULENT MOLECULAR CLOUDS WITH COLLIDING FLOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumoto, Tomoaki; Dobashi, Kazuhito; Shimoikura, Tomomi, E-mail: matsu@hosei.ac.jp

    2015-03-10

    Using self-gravitational hydrodynamical numerical simulations, we investigated the evolution of high-density turbulent molecular clouds swept by a colliding flow. The interaction of shock waves due to turbulence produces networks of thin filamentary clouds with a sub-parsec width. The colliding flow accumulates the filamentary clouds into a sheet cloud and promotes active star formation for initially high-density clouds. Clouds with a colliding flow exhibit a finer filamentary network than clouds without a colliding flow. The probability distribution functions (PDFs) for the density and column density can be fitted by lognormal functions for clouds without colliding flow. When the initial turbulence is weak, the column density PDF has a power-law wing at high column densities. The colliding flow considerably deforms the PDF, such that the PDF exhibits a double peak. The stellar mass distributions reproduced here are consistent with the classical initial mass function with a power-law index of –1.35 when the initial clouds have a high density. The distribution of stellar velocities agrees with the gas velocity distribution, which can be fitted by Gaussian functions for clouds without colliding flow. For clouds with colliding flow, the velocity dispersion of gas tends to be larger than the stellar velocity dispersion. The signatures of colliding flows and turbulence appear in channel maps reconstructed from the simulation data. Clouds without colliding flow exhibit a cloud-scale velocity shear due to the turbulence. In contrast, clouds with colliding flow show a prominent anti-correlated distribution of thin filaments between the different velocity channels, suggesting collisions between the filamentary clouds.

  20. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
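
    A minimal Monte Carlo sketch of the integration described above, with hypothetical error-ellipse and facility parameters (the technique itself integrates the bivariate Gaussian directly rather than by sampling):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical error-ellipse standard deviations (km) and facility offset
# from the reported stroke location; the probability that the true stroke
# location lies within a disc of the given radius around the facility is
# the integral of the bivariate Gaussian over that disc.
sigma_x, sigma_y = 0.5, 0.3      # semi-axes of the 1-sigma error ellipse
dx, dy, radius = 0.4, 0.2, 1.0   # facility offset and keyhole radius (km)

n = 200_000
x = rng.normal(0.0, sigma_x, n)
y = rng.normal(0.0, sigma_y, n)
p_inside = np.mean((x - dx) ** 2 + (y - dy) ** 2 <= radius ** 2)
print(f"P(stroke within {radius} km of facility) = {p_inside:.3f}")
```

    For operational use a deterministic quadrature over the disc would replace the sampling, but the quantity computed is the same facility-centric probability.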

  1. Reaction-diffusion on the fully-connected lattice: A+A\\rightarrow A

    NASA Astrophysics Data System (ADS)

    Turban, Loïc; Fortin, Jean-Yves

    2018-04-01

    Diffusion-coagulation can be simply described by a dynamics in which particles perform a random walk on a lattice and coalesce with probability unity when meeting on the same site. Such processes display non-equilibrium properties with strong fluctuations in low dimensions. In this work we study this problem on the fully-connected lattice, an infinite-dimensional system in the thermodynamic limit, for which mean-field behaviour is expected. Exact expressions for the particle density distribution at a given time and the survival-time distribution for a given number of particles are obtained. In particular, we show that the time needed to reach a finite number of surviving particles (vanishing density in the scaling limit) displays strong fluctuations and extreme value statistics, characterized by a universal class of non-Gaussian distributions with singular behaviour.

  2. Kinetic energy as functional of the correlation hole

    NASA Astrophysics Data System (ADS)

    Nalewajski, Roman F.

    2003-01-01

    Using the marginal decomposition of the many-body probability distribution the electronic kinetic energy is expressed as the functional of the electron density and correlation hole. The analysis covers both the molecule as a whole and its constituent subsystems. The importance of the Fisher information for locality is emphasized.

  3. Study on length distribution of ramie fibers

    USDA-ARS?s Scientific Manuscript database

    The extra-long length of ramie fibers and the high variation in fiber length has a negative impact on the spinning processes. In order to better study the feature of ramie fiber length, in this research, the probability density function of the mixture model applied in the characterization of cotton...

  4. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
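
    The two-step recipe (colour a white Gaussian sequence to the target spectrum, then apply the inverse transform to the target marginal) can be sketched as follows; the low-pass spectral envelope and the exponential target marginal are illustrative assumptions, not the paper's examples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n = 4096
white = rng.standard_normal(n)

# Step 1: impose an assumed low-pass spectral envelope in the Fourier domain.
freqs = np.fft.rfftfreq(n)
envelope = 1.0 / np.sqrt(1.0 + (freqs / 0.05) ** 2)
colored = np.fft.irfft(np.fft.rfft(white) * envelope, n)
colored /= colored.std()          # renormalise to unit variance

# Step 2: memoryless inverse-transform mapping to the target marginal:
# Gaussian CDF gives a uniform marginal, target inverse CDF gives the rest.
u = stats.norm.cdf(colored)       # uniform [0, 1] marginal, still correlated
samples = stats.expon.ppf(u)      # exponential marginal, colored in time

print(f"sample mean {samples.mean():.2f} (exponential target mean = 1)")
```

    The memoryless step distorts the spectrum somewhat, which is exactly the approximation the paper's sufficiency condition quantifies.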

  5. Nematode Damage Functions: The Problems of Experimental and Sampling Error

    PubMed Central

    Ferris, H.

    1984-01-01

    The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865

  6. Diffuse reflection from a stochastically bounded, semi-infinite medium

    NASA Technical Reports Server (NTRS)

    Lumme, K.; Peltoniemi, J. I.; Irvine, W. M.

    1990-01-01

    In order to determine the diffuse reflection from a medium bounded by a rough surface, the problem of radiative transfer in a boundary layer characterized by a statistical distribution of heights is considered. For the case that the surface is defined by a multivariate normal probability density, the propagation probability for rays traversing the boundary layer is derived and, from that probability, a corresponding radiative transfer equation. A solution of the Eddington (two stream) type is found explicitly, and examples are given. The results should be applicable to reflection from the regoliths of solar system bodies, as well as from a rough ocean surface.

  7. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
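
    A hedged sketch of the Weibull maximum-likelihood fit mentioned above, using synthetic durations in place of the limit order book data (which is not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic stand-in for normalized intertrade durations: Weibull draws
# with shape < 1, which gives the bursty clustering typical of trade
# arrivals, followed by a maximum-likelihood refit of the shape.
true_shape = 0.7
durations = stats.weibull_min.rvs(true_shape, scale=1.0, size=20_000,
                                  random_state=rng)

shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(durations, floc=0)
print(f"fitted Weibull shape: {shape_hat:.3f} (true {true_shape})")
```

    In the paper the competing q-exponential model is fitted the same way and the two are compared by likelihood and by least squares on the log-densities.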

  8. Bidirectional Classical Stochastic Processes with Measurements and Feedback

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    2005-01-01

    A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.

  9. Structural and mechanical properties of cardiolipin lipid bilayers determined using neutron spin echo, small angle neutron and X-ray scattering, and molecular dynamics simulations

    DOE PAGES

    Pan, Jianjun; Cheng, Xiaolin; Sharp, Melissa; ...

    2014-10-29

    We report that the detailed structural and mechanical properties of a tetraoleoyl cardiolipin (TOCL) bilayer were determined using neutron spin echo (NSE) spectroscopy, small angle neutron and X-ray scattering (SANS and SAXS, respectively), and molecular dynamics (MD) simulations. We used MD simulations to develop a scattering density profile (SDP) model, which was then utilized to jointly refine SANS and SAXS data. In addition to commonly reported lipid bilayer structural parameters, component distributions were obtained, including the volume probability, electron density and neutron scattering length density.

  10. Uranium distribution and 'excessive' U-He ages in iron meteoritic troilite

    NASA Technical Reports Server (NTRS)

    Fisher, D. E.

    1985-01-01

    Fission tracking techniques were used to measure the uranium distribution in meteoritic troilite and graphite. The obtained fission tracking data showed a heterogeneous distribution of tracks with a significant portion of track density present in the form of uranium clusters at least 10 microns in size. The matrix containing the clusters was also heterogeneous in composition with U concentrations of about 0.2-4.7 ppb. U/He ages could not be estimated on the basis of the heterogeneous U distributions, so previously reported estimates of U/He ages in the presolar range are probably invalid.

  11. Theory after experiment on sensing mechanism of a newly developed sensor molecule: Converging or diverging?

    NASA Astrophysics Data System (ADS)

    Paul, Suvendu; Karar, Monaj; Das, Biswajit; Mallick, Arabinda; Majumdar, Tapas

    2017-12-01

    The fluoride ion sensing mechanism of 3,3′-bis(indolyl)-4-chlorophenylmethane has been analyzed with density functional and time-dependent density functional theories. Extensive theoretical calculations on molecular geometry & energy, charge distribution, orbital energies & electronic distribution, and minima on the potential energy surface confirmed a strongly hydrogen-bonded sensor-anion complex with incomplete proton transfer in S0. In S1, strong hydrogen bonding extended towards complete ESDPT. The distinct single minima on the PES of the sensor-anion complex for both the ground and first singlet excited states confirmed the concerted proton transfer mechanism. The present study reproduced the experimental spectroscopic data well and identified ESDPT as the probable fluoride sensing mechanism.

  12. The distribution of density in supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan; Hopkins, Philip F.

    2017-11-01

    We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ~M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M>1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
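
    The lognormal baseline against which such models measure intermittency can be written down directly. The variance-Mach relation sigma_s^2 = ln(1 + b^2 M^2) with b of order 0.4 used below is a commonly quoted parameterization from the turbulence literature, not necessarily the paper's own:

```python
import numpy as np

def density_pdf(s, mach, b=0.4):
    """Lognormal PDF of s = ln(rho/rho0), with sigma_s^2 = ln(1 + b^2 M^2)
    and mean -sigma_s^2/2 so that mass is conserved (<rho/rho0> = 1)."""
    var = np.log(1.0 + (b * mach) ** 2)
    mean = -0.5 * var
    return np.exp(-(s - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

s = np.linspace(-8.0, 8.0, 4001)
ds = s[1] - s[0]
pdf = density_pdf(s, mach=5.0)

norm = pdf.sum() * ds                    # should integrate to ~1
mean_rho = (np.exp(s) * pdf).sum() * ds  # should be ~1 by construction
print(f"normalisation {norm:.4f}, <rho/rho0> = {mean_rho:.4f}")
```

    Deviations of measured density PDFs from this curve are what the shock-based model above is designed to capture.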

  13. A computationally efficient ductile damage model accounting for nucleation and micro-inertia at high triaxialities

    DOE PAGES

    Versino, Daniele; Bronkhorst, Curt Allan

    2018-01-31

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pores nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved withmore » a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.« less

  14. An analytically soluble problem in fully nonlinear statistical gravitational lensing

    NASA Technical Reports Server (NTRS)

    Schneider, P.

    1987-01-01

    The amplification probability distribution p(I)dI for a point source behind a random star field which acts as the deflector exhibits an I^(-3) behavior for large amplification, as can be shown from the universality of the lens equation near critical lines. In this paper it is shown that the amplitude of the I^(-3) tail can be derived exactly for arbitrary mass distribution of the stars, surface mass density of stars and smoothly distributed matter, and large-scale shear. This is then compared with the corresponding linear result.

  15. Spatial distribution of traffic in a cellular mobile data network

    NASA Astrophysics Data System (ADS)

    Linnartz, J. P. M. G.

    1987-02-01

    The use of integral transforms of the probability density function for the received power to analyze the relation between the spatial distributions of offered and throughput packet traffic in a mobile radio network with Rayleigh fading channels and ALOHA multiple access was assessed. A method to obtain the spatial distribution of throughput traffic from a prescribed spatial distribution of offered traffic is presented. Incoherent and coherent addition of interference signals is considered. The channel behavior for heavy traffic loads is studied. In both the incoherent and coherent case, the spatial distribution of offered traffic required to ensure a prescribed spatially uniform throughput is synthesized numerically.

  16. Electron emission produced by photointeractions in a slab target

    NASA Technical Reports Server (NTRS)

    Thinger, B. E.; Dayton, J. A., Jr.

    1973-01-01

    The current density and energy spectrum of escaping electrons generated in a uniform plane slab target which is being irradiated by the gamma flux field of a nuclear reactor are calculated by using experimental gamma energy transfer coefficients, electron range and energy relations, and escape probability computations. The probability of escape and the average path length of escaping electrons are derived for an isotropic distribution of monoenergetic photons. The method of estimating the flux and energy distribution of electrons emerging from the surface is outlined, and a sample calculation is made for a 0.33-cm-thick tungsten target located next to the core of a nuclear reactor. The results are to be used as a guide in electron beam synthesis of reactor experiments.

  17. Distinguishability notion based on Wootters statistical distance: Application to discrete maps

    NASA Astrophysics Data System (ADS)

    Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.

    2017-08-01

    We study the distinguishability notion given by Wootters for states represented by probability density functions. This has the particularity that it can also be used to define a statistical distance in chaotic unidimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄, we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. Also, we give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in the phase space. We illustrate the results in the case of the logistic and the circle maps numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the associated metric space to arbitrary probability distributions (not necessarily invariant densities) is given along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.

  18. Statistical hypothesis tests of some micrometeorological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SethuRaman, S.; Tichler, J.

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.21 were normal to begin with, and those with 0.21 …
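
    The testing chain described above (bin the observations, compare with expected normal counts via chi-square, and compute the skewness g1 and excess g2 coefficients) can be sketched on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Standardized synthetic observations standing in for the turbulence data.
x = rng.normal(0.0, 1.0, 1000)
x = (x - x.mean()) / x.std()

# Bin, compare with expected normal counts, and form the chi-square statistic.
edges = np.array([-np.inf, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, np.inf])
observed, _ = np.histogram(x, bins=edges)
expected = np.diff(stats.norm.cdf(edges)) * len(x)

chi2 = np.sum((observed - expected) ** 2 / expected)
g1, g2 = stats.skew(x), stats.kurtosis(x)   # skewness and excess
p_value = stats.chi2.sf(chi2, df=len(observed) - 1 - 2)  # 2 fitted moments
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, g1 = {g1:.3f}, g2 = {g2:.3f}")
```

    When the test rejects normality, g1 and g2 feed the Gram-Charlier correction and the chi-square comparison is repeated against the modified density.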

  19. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws must be detected reliably by these NDE methods, and a reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion of optimizing probability of detection (POD) demonstration experiments using the Point Estimate Method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible.
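
    The binomial arithmetic behind the standard 29-flaw point-estimate demonstration is compact enough to sketch; passing requires detecting every flaw in the set:

```python
# Probability of passing a point-estimate demonstration in which all n
# flaws must be detected, as a function of the true (unknown) POD.
n = 29

def prob_pass(true_pod: float, n_flaws: int = n) -> float:
    """P(all n_flaws detected) for independent detections with given POD."""
    return true_pod ** n_flaws

# A procedure whose true POD is only 0.90 passes less than 5% of the time,
# which is why 29-of-29 demonstrates 90% POD at 95% confidence.
print(f"PPD at POD=0.90: {prob_pass(0.90):.4f}")
print(f"PPD at POD=0.99: {prob_pass(0.99):.4f}")
```

    The optimization discussed in the paper trades this passing probability against flaw size and false-call rate.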

  20. Spatial distribution of the dagger nematode Xiphinema index and its associated Grapevine fanleaf virus in French vineyard.

    PubMed

    Villate, L; Fievet, V; Hanse, B; Delemarre, F; Plantard, O; Esmenjaud, D; van Helden, M

    2008-08-01

    The nematode Xiphinema index is, economically, the major virus vector in viticulture, transmitting specifically the Grapevine fanleaf virus (GFLV), the most severe grapevine virus disease worldwide. Increased knowledge of the spatial distribution of this nematode, both horizontally and vertically, and of correlative GFLV plant infections, is essential to efficiently control the disease. In two infested blocks of the Bordeaux vineyard, vertical distribution data showed that the highest numbers of individuals occurred at 40 to 110 cm depth, corresponding to the two layers where the highest densities of fine roots were observed. Horizontal distribution based on a 10 x 15 m grid sampling procedure revealed a significant aggregative pattern but no significant neighborhood structure of nematode densities. At a finer scale (approximately 2 x 2 m), nematode sampling performed in a third block confirmed a significant aggregative pattern, with patches of 6 to 8 m diameter, together with a significant neighborhood structure of nematode densities, thus identifying the relevant sampling scale to describe the nematode distribution. Nematode patches correlate significantly with those of GFLV-infected grapevine plants. Finally, nematode and virus spread were shown to extend preferentially parallel to vine rows, probably due to tillage during mechanical weeding.

  1. Impact of stratospheric aircraft on calculations of nitric acid trihydrate cloud surface area densities using NMC temperatures and 2D model constituent distributions

    NASA Technical Reports Server (NTRS)

    Considine, David B.; Douglass, Anne R.

    1994-01-01

    A parameterization of NAT (nitric acid trihydrate) clouds is developed for use in 2D models of the stratosphere. The parameterization uses model distributions of HNO3 and H2O to determine critical temperatures for NAT formation as a function of latitude and pressure. National Meteorological Center temperature fields are then used to determine monthly temperature frequency distributions, also as a function of latitude and pressure. The fractions of these distributions which fall below the critical temperatures for NAT formation are then used to determine the NAT cloud surface area density for each location in the model grid. By specifying heterogeneous reaction rates as functions of the surface area density, it is then possible to assess the effects of the NAT clouds on model constituent distributions. We also consider the increase in the NAT cloud formation in the presence of a fleet of stratospheric aircraft. The stratospheric aircraft NO(x) and H2O perturbations result in increased HNO3 as well as H2O. This increases the probability of NAT formation substantially, especially if it is assumed that the aircraft perturbations are confined to a corridor region.

  2. Concurrent effects of age class and food distribution on immigration success and population dynamics in a small mammal.

    PubMed

    Rémy, Alice; Le Galliard, Jean-François; Odden, Morten; Andreassen, Harry P

    2014-07-01

    During the settlement stage of dispersal, the outcome of conflicts between residents and immigrants should depend on the social organization of resident populations as well as on individual traits of immigrants, such as their age class, body mass and/or behaviour. We have previously shown that spatial distribution of food influences the social organization of female bank voles (Myodes glareolus). Here, we aimed to determine the relative impact of food distribution and immigrant age class on the success and demographic consequences of female bank vole immigration. We manipulated the spatial distribution of food within populations having either clumped or dispersed food. After a pre-experimental period, we released either adult immigrants or juvenile immigrants, for which we scored sociability and aggressiveness prior to introduction. We found that immigrant females survived less well and moved more between populations than resident females, which suggest settlement costs. However, settled juvenile immigrants had a higher probability to reproduce than field-born juveniles. Food distribution had little effects on the settlement success of immigrant females. Survival and settlement probabilities of immigrants were influenced by adult female density in opposite ways for adult and juvenile immigrants, suggesting a strong adult-adult competition. Moreover, females of higher body mass at release had a lower probability to survive, and the breeding probability of settled immigrants increased with their aggressiveness and decreased with their sociability. Prior to the introduction of immigrants, resident females were more aggregated in the clumped food treatment than in the dispersed food treatment, but immigration reversed this relationship. In addition, differences in growth trajectories were seen during the breeding season, with populations reaching higher densities when adult immigrants were introduced in a plot with dispersed food, or when juvenile immigrants were introduced in a plot with clumped food. These results indicate the relative importance of intrinsic and extrinsic factors on immigration success and demographic consequences of dispersal and are of relevance to conservation actions, such as reinforcement of small populations. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.

  3. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
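
    A Monte Carlo sketch of the distance-method relationship above for a random (Poisson) pattern; the estimator form used here is the standard one for that case, not necessarily the paper's order-statistics version:

```python
import numpy as np

rng = np.random.default_rng(11)

# Poisson ("random") plant pattern of known density lam: the squared
# point-to-nearest-plant distance is then exponential with mean
# 1/(pi * lam), which motivates the estimator lam_hat = n / (pi * sum r_i^2).
lam, side = 50.0, 6.0
n_plants = rng.poisson(lam * side * side)
plants = rng.uniform(0.0, side, size=(n_plants, 2))

# Random sample points kept away from the border to limit edge effects.
pts = rng.uniform(1.0, side - 1.0, size=(500, 2))
d2 = ((pts[:, None, :] - plants[None, :, :]) ** 2).sum(axis=-1).min(axis=1)

lam_hat = len(pts) / (np.pi * d2.sum())
print(f"true density {lam}, distance-method estimate {lam_hat:.1f}")
```

    For aggregated or regular populations this simple estimator is biased, which is precisely the situation the nonparametric estimator of the paper is built to handle.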

  4. Methodology of Calculating the Terminal Settling Velocity Distribution of Spherical Particles for High Values of the Reynolds Number

    NASA Astrophysics Data System (ADS)

    Surowiak, Agnieszka; Brożek, Marian

    2014-03-01

    The particle settling velocity is the feature on which separation is based in processes such as flowing classification and jigging. It characterizes the material forwarded to the separation process and belongs to the so-called complex features because it is a function of particle density and size, i.e. a function of two simple features. The affiliation of a particle to a given subset is determined by the values of these two properties, and the distribution of the complex feature in a sample is a function of the distributions of particle density and size. Knowledge of the distribution of particle settling velocity in the jigging process is as important as knowledge of the particle size distribution in screening or of the particle density distribution in dense-media beneficiation. The paper presents a method of determining the distribution of settling velocity in a sample of spherical particles for turbulent particle motion, in which the settling velocity is expressed by the Newton formula. Because it depends on particle density and size, which are random variables with certain distributions, the settling velocity is itself a random variable. Applying theorems of probability concerning distribution functions of random variables, the authors present a general formula for the probability density function of settling velocity in turbulent motion and, in particular, calculate the probability density function for Weibull forms of the frequency functions of particle size and density. The distribution of settling velocity was calculated numerically and presented in graphical form. The paper presents a simulation of the calculation of the settling-velocity distribution on the basis of real distributions of density and projective diameter of particles, assuming that the particles are spherical.
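    The construction described above can also be sampled directly: draw particle diameter and density from Weibull laws and push each pair through the Newton formula to accumulate the empirical settling-velocity distribution. A minimal Monte Carlo sketch, with illustrative Weibull parameters and fluid properties (not the paper's data):

    ```python
    import math
    import random

    # Monte Carlo sketch of the settling-velocity distribution for spherical
    # particles in the turbulent (Newton) regime. All parameter values below
    # are illustrative assumptions.
    G = 9.81        # gravitational acceleration, m/s^2
    CD = 0.44       # Newton-regime drag coefficient for spheres
    RHO_F = 1000.0  # fluid density, kg/m^3 (water)

    def settling_velocity(d, rho_p):
        """Newton formula: v = sqrt(4 g d (rho_p - rho_f) / (3 C_d rho_f))."""
        return math.sqrt(4.0 * G * d * (rho_p - RHO_F) / (3.0 * CD * RHO_F))

    def sample_velocities(n, seed=0):
        rng = random.Random(seed)
        vs = []
        for _ in range(n):
            d = rng.weibullvariate(0.01, 2.0)                # diameter, m (Weibull)
            rho_p = 1300.0 + rng.weibullvariate(800.0, 1.5)  # density, kg/m^3 (shifted Weibull)
            vs.append(settling_velocity(d, rho_p))
        return sorted(vs)

    vs = sample_velocities(100_000)
    median_v = vs[len(vs) // 2]  # empirical median settling velocity, m/s
    ```

    The histogram of `vs` approximates the settling-velocity density that the paper derives analytically from the two Weibull frequency functions.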

  5. The computer simulation of automobile use patterns for defining battery requirements for electric cars

    NASA Technical Reports Server (NTRS)

    Schwartz, H.-J.

    1976-01-01

    The modeling process of a complex system, based on the calculation and optimization of the system parameters, is complicated when some parameters can be expressed only as probability distributions. In the present paper, a Monte Carlo technique was used to determine the daily range requirements of an electric road vehicle in the United States from probability distributions of trip lengths and frequencies together with average annual mileage data. The analysis shows that a daily range of 82 miles meets 95% of car owners' requirements at all times, with the exception of long vacation trips. Further, it is shown that the requirement of a daily range of 82 miles can be met by an intermediate-level battery technology characterized by an energy density of 30 to 50 watt-hours per pound. Candidate batteries in this class are nickel-zinc, nickel-iron, and iron-air. These results imply that long-term research goals for battery systems should be focused on lower cost and longer service life, rather than on higher energy densities.
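    The Monte Carlo idea is simple to sketch: draw each day's trips from assumed distributions of trip count and trip length, accumulate daily mileage, and read off the range that covers 95% of days. The trip-count weights and lognormal trip-length parameters below are illustrative assumptions, not the 1976 survey data behind the 82-mile result:

    ```python
    import random

    # Sample daily driving distances from assumed trip-count and trip-length
    # distributions (illustrative parameters), then take the 95th percentile.
    def simulate_daily_miles(n_days, seed=1):
        rng = random.Random(seed)
        daily = []
        for _ in range(n_days):
            n_trips = rng.choices([0, 1, 2, 3, 4, 5],
                                  weights=[5, 25, 30, 20, 12, 8])[0]
            daily.append(sum(rng.lognormvariate(1.8, 0.9) for _ in range(n_trips)))
        return sorted(daily)

    daily = simulate_daily_miles(50_000)
    range_95 = daily[int(0.95 * len(daily))]  # daily range covering 95% of days
    ```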

  6. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This makes it possible to take into account the correlations among resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty in the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
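    The propagation-of-distributions step can be sketched as follows: draw correlated resistances at a fixed point and at the triple point of water from a multivariate Gaussian, form the ratio W = R / R_tpw (the ITS-90 quantity), and take the empirical mean and standard uncertainty. All numbers are illustrative, not the 25 Ω SPRT data from the paper:

    ```python
    import numpy as np

    # Monte Carlo propagation of distributions through the resistance ratio
    # W = R / R_tpw, with correlated Gaussian inputs (illustrative values).
    rng = np.random.default_rng(0)

    mean = np.array([27.0, 25.0])  # R at fixed point, R at TPW (ohm)
    u = np.array([2e-5, 2e-5])     # standard uncertainties (ohm)
    rho = 0.8                      # assumed correlation between the two
    cov = np.array([[u[0] ** 2, rho * u[0] * u[1]],
                    [rho * u[0] * u[1], u[1] ** 2]])

    draws = rng.multivariate_normal(mean, cov, size=200_000)
    W = draws[:, 0] / draws[:, 1]  # resistance ratio
    w_mean, w_std = W.mean(), W.std(ddof=1)
    ```

    The positive correlation partly cancels in the ratio, which is exactly the effect the multivariate draw is meant to capture.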

  7. An analytical model for regular respiratory signals derived from the probability density function of Rayleigh distribution.

    PubMed

    Li, Xin; Li, Ye

    2015-01-01

    Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or to predict breathing motion so that beam delivery can proceed under free-breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate; it is, however, subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. This defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of the Rayleigh distribution. We evaluate the derived model by fitting it to a real-life RRS in the least-squares sense, and the evaluation shows that our model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.
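    The building block of the model is the Rayleigh probability density f(t) = (t / σ²) exp(−t² / (2σ²)), whose slow decay past the peak is what allows it to track a slowly descending end-of-exhale phase. A sketch of the density alone (not the authors' full fitting procedure):

    ```python
    import math

    # Rayleigh probability density; its mode is at t = sigma and it decays
    # slowly for t > sigma, mimicking a slow end-of-exhale phase.
    def rayleigh_pdf(t, sigma):
        return (t / sigma ** 2) * math.exp(-t ** 2 / (2.0 * sigma ** 2))

    sigma = 1.5
    ts = [i * 0.01 for i in range(1, 600)]
    vals = [rayleigh_pdf(t, sigma) for t in ts]
    t_peak = ts[vals.index(max(vals))]  # the mode of the Rayleigh density
    ```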

  8. Statistical properties of a filtered Poisson process with additive random noise: distributions, correlations and moment estimation

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; E Garcia, O.; Rypdal, M.

    2017-05-01

    Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
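    A filtered Poisson process with purely additive noise is easy to simulate and to check against the known lowest-order moments. The sketch below uses exponential pulse amplitudes of mean ⟨A⟩, one-sided exponential pulse relaxation with duration τ_d, Poisson arrivals at rate γ, and white Gaussian observation noise; parameter values are illustrative:

    ```python
    import math
    import random

    # Euler-style simulation of a filtered Poisson process (shot noise) with
    # additive Gaussian observation noise. For these parameters the theoretical
    # signal mean is gamma * tau_d * <A> = 1 and the variance is
    # gamma * tau_d * <A^2> / 2 = 1 (exponential amplitudes: <A^2> = 2<A>^2).
    def simulate(n_steps, dt=0.01, rate=1.0, tau_d=1.0, amp_mean=1.0,
                 noise_sd=0.1, seed=2):
        rng = random.Random(seed)
        x = 0.0
        out = []
        decay = math.exp(-dt / tau_d)
        for _ in range(n_steps):
            x *= decay                                # pulse relaxation
            if rng.random() < rate * dt:              # Poisson arrival
                x += rng.expovariate(1.0 / amp_mean)  # exponential amplitude
            out.append(x + rng.gauss(0.0, noise_sd))  # additive noise
        return out

    sig = simulate(200_000)
    mean = sum(sig) / len(sig)
    var = sum((s - mean) ** 2 for s in sig) / (len(sig) - 1)
    ```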

  9. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S_3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated. (c) 2000 The American Astronomical Society.

  10. Environmental dependence of the galaxy stellar mass function in the Dark Energy Survey Science Verification Data

    DOE PAGES

    Etherington, J.; Thomas, D.; Maraston, C.; ...

    2016-01-04

    Measurements of the galaxy stellar mass function are crucial to understand the formation of galaxies in the Universe. In a hierarchical clustering paradigm it is plausible that there is a connection between the properties of galaxies and their environments. Evidence for environmental trends has been established in the local Universe. The Dark Energy Survey (DES) provides large photometric datasets that enable further investigation of the assembly of mass. In this study we use ~3.2 million galaxies from the (South Pole Telescope) SPT-East field in the DES science verification (SV) dataset. From grizY photometry we derive galaxy stellar masses and absolute magnitudes, and determine the errors on these properties using Monte-Carlo simulations using the full photometric redshift probability distributions. We compute galaxy environments using a fixed conical aperture for a range of scales. We construct galaxy environment probability distribution functions and investigate the dependence of the environment errors on the aperture parameters. We compute the environment components of the galaxy stellar mass function for the redshift range 0.15 < z < 1.05. For z < 0.75 we find that the fraction of massive galaxies is larger in high-density environments than in low-density environments. We show that the low-density and high-density components converge with increasing redshift up to z ~ 1.0, where the shapes of the mass function components are indistinguishable. Our study thus shows how high-density structures build up around massive galaxies through cosmic time.

  11. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because the conductivity distribution inside the body varies, EIT produces multi-channel data. In order to obtain all the information contained at different tissue locations, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (the individual conductivity distributions). Using ICA, the signal subspace is decomposed into statistically independent components. The individual conductivity distributions can then be reconstructed using the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.
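    The decomposition step can be sketched with a minimal FastICA-style iteration: mix two independent non-Gaussian signals, whiten the mixtures, and recover an unmixing matrix by a fixed-point update with a tanh nonlinearity and symmetric decorrelation. This illustrates only the generic ICA step; the EIT-specific reconstruction via the sensitivity theorem is not modelled, and all signals are synthetic:

    ```python
    import numpy as np

    # Two independent non-Gaussian sources, linearly mixed.
    rng = np.random.default_rng(8)
    n = 5000
    s = np.vstack([rng.laplace(size=n),          # super-Gaussian source
                   rng.uniform(-1, 1, size=n)])  # sub-Gaussian source
    A = np.array([[1.0, 0.6], [0.4, 1.0]])       # "unknown" mixing matrix
    x = A @ s

    # Whiten the observed mixtures.
    x = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(x))
    z = (E @ np.diag(d ** -0.5) @ E.T) @ x

    # Fixed-point iterations: W+ = E[g(Wz) z^T] - diag(E[g'(Wz)]) W,
    # followed by symmetric decorrelation W <- (W W^T)^{-1/2} W via the SVD.
    W = np.linalg.svd(rng.standard_normal((2, 2)))[0]
    for _ in range(100):
        g = np.tanh(W @ z)
        W_new = (g @ z.T) / n - np.diag((1.0 - g ** 2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt

    recovered = W @ z
    # Each recovered component should match one source up to sign and scale.
    corr = np.abs(np.corrcoef(np.vstack([recovered, s]))[:2, 2:])
    ```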

  12. Universal characteristics of fractal fluctuations in prime number distribution

    NASA Astrophysics Data System (ADS)

    Selvam, A. M.

    2014-11-01

    The frequency of occurrence of prime numbers at unit number spacing intervals exhibits self-similar fractal fluctuations concomitant with inverse power law form for power spectrum generic to dynamical systems in nature such as fluid flows, stock market fluctuations and population dynamics. The physics of long-range correlations exhibited by fractals is not yet identified. A recently developed general systems theory visualizes the eddy continuum underlying fractals to result from the growth of large eddies as the integrated mean of enclosed small scale eddies, thereby generating a hierarchy of eddy circulations or an inter-connected network with associated long-range correlations. The model predictions are as follows: (1) The probability distribution and power spectrum of fractals follow the same inverse power law which is a function of the golden mean. The predicted inverse power law distribution is very close to the statistical normal distribution for fluctuations within two standard deviations from the mean of the distribution. (2) Fractals signify quantum-like chaos since variance spectrum represents probability density distribution, a characteristic of quantum systems such as electron or photon. (3) Fractal fluctuations of frequency distribution of prime numbers signify spontaneous organization of underlying continuum number field into the ordered pattern of the quasiperiodic Penrose tiling pattern. The model predictions are in agreement with the probability distributions and power spectra for different sets of frequency of occurrence of prime numbers at unit number interval for successive 1000 numbers. Prime numbers in the first 10 million numbers were used for the study.
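    The raw series whose fractal fluctuations the paper analyses — the count of primes in successive blocks of 1000 integers — can be generated with a simple sieve of Eratosthenes; only the counting step is sketched here, not the power-spectrum analysis:

    ```python
    # Count primes in successive blocks of `block` integers up to n_max.
    def prime_counts_per_block(n_max, block=1000):
        sieve = bytearray([1]) * (n_max + 1)
        sieve[0] = sieve[1] = 0
        for p in range(2, int(n_max ** 0.5) + 1):
            if sieve[p]:
                # strike out multiples of p starting at p*p
                sieve[p * p :: p] = bytearray(len(range(p * p, n_max + 1, p)))
        return [sum(sieve[i : i + block]) for i in range(0, n_max, block)]

    counts = prime_counts_per_block(1_000_000)  # 1000 blocks of 1000 numbers
    ```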

  13. Ionization compression impact on dense gas distribution and star formation. Probability density functions around H II regions as seen by Herschel

    NASA Astrophysics Data System (ADS)

    Tremblin, P.; Schneider, N.; Minier, V.; Didelon, P.; Hill, T.; Anderson, L. D.; Motte, F.; Zavagno, A.; André, Ph.; Arzoumanian, D.; Audit, E.; Benedettini, M.; Bontemps, S.; Csengeri, T.; Di Francesco, J.; Giannini, T.; Hennemann, M.; Nguyen Luong, Q.; Marston, A. P.; Peretto, N.; Rivera-Ingraham, A.; Russeil, D.; Rygl, K. L. J.; Spinoglio, L.; White, G. J.

    2014-04-01

    Aims: Ionization feedback should impact the probability distribution function (PDF) of the column density of cold dust around the ionized gas. We aim to quantify this effect and discuss its potential link to the core and initial mass function (CMF/IMF). Methods: We used Herschel column density maps of several regions observed within the HOBYS key program in a systematic way: M 16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We computed the PDFs in concentric disks around the main ionizing sources, determined their properties, and discuss the effect of ionization pressure on the distribution of the column density. Results: We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a "double-peak" or an enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. Such a double peak is not visible for all clouds associated with ionization fronts, but it depends on the relative importance of ionization pressure and turbulent ram pressure. A power-law tail is present for higher column densities, which are generally ascribed to the effect of gravity. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion that is able to disentangle triggered star formation from pre-existing star formation. Conclusions: In the context of the gravo-turbulent scenario for the origin of the CMF/IMF, the double-peaked or enlarged shape of the PDF may affect the formation of objects at both the low-mass and the high-mass ends of the CMF/IMF. 
In particular, a broader PDF is required by the gravo-turbulent scenario to fit the IMF properly with a reasonable initial Mach number for the molecular cloud. Since other physical processes (e.g., the equation of state and the variations among the core properties) have already been said to broaden the PDF, the relative importance of the different effects remains an open question. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
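    The two-component fit used for the column-density PDFs can be sketched as a mixture of two lognormals, one for the turbulent bulk and one for the ionization-compressed component. The weights, centres, and widths below are illustrative, not the Herschel fits:

    ```python
    import math

    # Lognormal density and a two-lognormal mixture, as used to fit
    # "double-peaked" column-density PDFs (illustrative parameters).
    def lognormal_pdf(x, mu, sigma):
        return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
            x * sigma * math.sqrt(2 * math.pi))

    def double_lognormal_pdf(x, w, mu1, s1, mu2, s2):
        return w * lognormal_pdf(x, mu1, s1) + (1.0 - w) * lognormal_pdf(x, mu2, s2)

    # column density on an arbitrary (illustrative) scale
    xs = [0.1 * i for i in range(1, 400)]
    pdf = [double_lognormal_pdf(x, 0.7, math.log(2.0), 0.4, math.log(8.0), 0.3)
           for x in xs]
    total = sum(pdf) * 0.1  # crude normalization check over the grid
    ```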

  14. Anomalous transport in fluid field with random waiting time depending on the preceding jump length

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Li, Guo-Hua

    2016-11-01

    Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting-time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations, for the Gaussian distribution and the Lévy flight, with the probability density function of the waiting time being quadratically dependent on the preceding jump length, are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).
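    A correlated CTRW of this type is straightforward to simulate: Gaussian jump lengths, with the mean waiting time before the next jump depending quadratically on the length of the preceding jump (long jumps followed by long rests). The coefficients are illustrative assumptions, not the paper's model parameters:

    ```python
    import random

    # CTRW with waiting time depending quadratically on the preceding jump.
    def simulate_ctrw(t_max, seed):
        rng = random.Random(seed)
        t, x, last_jump = 0.0, 0.0, 0.0
        while True:
            tau_mean = 1.0 + 5.0 * last_jump ** 2  # waiting-time scale
            t += rng.expovariate(1.0 / tau_mean)   # exponential waiting time
            if t > t_max:
                return x
            last_jump = rng.gauss(0.0, 1.0)        # Gaussian jump length
            x += last_jump

    positions = [simulate_ctrw(200.0, seed) for seed in range(2000)]
    mean_x = sum(positions) / len(positions)
    var_x = sum((p - mean_x) ** 2 for p in positions) / (len(positions) - 1)
    ```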

  15. The bingo model of survivorship: 1. probabilistic aspects.

    PubMed

    Murphy, E A; Trojak, J E; Hou, W; Rohde, C A

    1981-01-01

    A "bingo" model is one in which the pattern of survival of a system is determined by whichever of several components, each with its own particular distribution for survival, fails first. The model is motivated by the study of lifespan in animals. A number of properties of such systems are discussed in general. They include the use of a special criterion of skewness that probably corresponds more closely than traditional measures to what the eye observes in casually inspecting data. This criterion is the ratio, r(h), of the probability density at a point an arbitrary distance, h, above the mode to that an equal distance below the mode. If this ratio exceeds unity for all positive arguments, the distribution is considered positively asymmetrical, and conversely. Details of the bingo model are worked out for several types of base distributions: the rectangular, the triangular, the logistic, and by numerical methods, the normal, lognormal, and gamma.
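    The core of the bingo model — system lifetime as the minimum over several component lifetimes, each with its own survival distribution — can be sketched in a few lines. The three normal component distributions are illustrative, not the base distributions worked out in the paper:

    ```python
    import random

    # Bingo model sketch: the system fails when its first component fails,
    # so the system lifetime is the minimum of the component lifetimes.
    def bingo_lifetimes(n, seed=3):
        rng = random.Random(seed)
        out = []
        for _ in range(n):
            comps = [
                rng.normalvariate(80.0, 10.0),
                rng.normalvariate(90.0, 15.0),
                rng.normalvariate(100.0, 20.0),
            ]
            out.append(min(comps))  # first failure ends the system
        return out

    lifes = bingo_lifetimes(100_000)
    mean_life = sum(lifes) / len(lifes)
    ```

    The distribution of `lifes` is shifted left of every component distribution, which is the source of the asymmetry the paper quantifies with r(h).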

  16. High-precision simulation of the height distribution for the KPZ equation

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Le Doussal, Pierre; Majumdar, Satya N.; Rosso, Alberto; Schehr, Gregory

    2018-03-01

    The one-point distribution of the height for the continuum Kardar-Parisi-Zhang (KPZ) equation is determined numerically using the mapping to the directed polymer in a random potential at high temperature. Using an importance sampling approach, the distribution is obtained over a large range of values, down to a probability density as small as 10^-1000 in the tails. Both short and long times are investigated and compared with recent analytical predictions for the large-deviation forms of the probability of rare fluctuations. At short times the agreement with the analytical expression is spectacular. We observe that the far left and right tails, with exponents 5/2 and 3/2, respectively, are preserved also in the region of long times. We present some evidence for the predicted non-trivial crossover in the left tail from the 5/2 tail exponent to the cubic tail of the Tracy-Widom distribution, although the details of the full scaling form remain beyond reach.

  17. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations fail to provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to filling this gap is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume, and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model, with the spatial moments replacing the statistical moments, can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds.
Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and shows for the first time the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
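    Because the Beta pdf is fully determined by the first two moments, the shape parameters follow directly by moment matching: for mean m and variance v (with v < m(1−m)), α = m·k and β = (1−m)·k where k = m(1−m)/v − 1. A minimal sketch of this parameterization, not the authors' SDE derivation:

    ```python
    import math

    # Moment-matched Beta parameters: given mean m and variance v of the
    # concentration on [0, 1], recover the shape parameters (alpha, beta).
    def beta_params_from_moments(m, v):
        k = m * (1.0 - m) / v - 1.0
        return m * k, (1.0 - m) * k

    def beta_pdf(x, a, b):
        log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
        return math.exp(log_norm + (a - 1.0) * math.log(x)
                        + (b - 1.0) * math.log(1.0 - x))

    a, b = beta_params_from_moments(0.2, 0.01)
    # round-trip check: mean = a/(a+b), var = ab/((a+b)^2 (a+b+1))
    mean_back = a / (a + b)
    var_back = a * b / ((a + b) ** 2 * (a + b + 1.0))
    ```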

  18. Wave turbulence in shallow water models.

    PubMed

    Clark di Leoni, P; Cobelli, P J; Mininni, P D

    2014-06-01

    We study wave turbulence in shallow water flows in numerical simulations using two different approximations: the shallow water model and the Boussinesq model with weak dispersion. The equations for both models were solved using periodic grids with up to 2048^2 points. In all simulations, the Froude number varies between 0.015 and 0.05, while the Reynolds number and level of dispersion are varied in a broader range to span different regimes. In all cases, most of the energy in the system remains in the waves, even after integrating the system for very long times. For shallow flows, nonlinear waves are nondispersive and the spectrum of potential energy is compatible with ∼k^{-2} scaling. For deeper (Boussinesq) flows, the nonlinear dispersion relation as directly measured from the wave and frequency spectrum (calculated independently) shows signatures of dispersion, and the spectrum of potential energy is compatible with predictions of weak turbulence theory, ∼k^{-4/3}. In this latter case, the nonlinear dispersion relation differs from the linear one and has two branches, which we explain with a simple qualitative argument. Finally, we study probability density functions of the surface height and find that in all cases the distributions are asymmetric. The probability density function can be approximated by a skewed normal distribution as well as by a Tayfun distribution.
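    The skew-normal density used to approximate the surface-height pdf is f(x) = 2 φ(x) Φ(αx), where φ and Φ are the standard normal pdf and cdf and α sets the asymmetry (α = 0 recovers the Gaussian). A minimal sketch of the density itself, not the paper's fits:

    ```python
    import math

    def phi(x):
        """Standard normal pdf."""
        return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

    def Phi(x):
        """Standard normal cdf."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def skew_normal_pdf(x, alpha):
        return 2.0 * phi(x) * Phi(alpha * x)

    # asymmetry check: with alpha > 0 the density leans towards positive x
    left, right = skew_normal_pdf(-1.0, 3.0), skew_normal_pdf(1.0, 3.0)
    ```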

  19. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power-law decay, with density f(x) ≃ f_0 x^{-2}, where the coefficient depends on the means and variances of the numerator and denominator and on their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [-1, 1). For ρ = -1 we obtain a particularly simple closed-form solution for all x ∈ R. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price-change model, we prove that the relative price change has a density that decays with an x^{-2} power law. Various parameter limits are established.
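    The x^{-2} tail can be checked empirically: sample the quotient Z = X/Y of two independent standard normals (the zero-mean special case, which is Cauchy) and compare tail probabilities. For a density with an f_0 x^{-2} tail, P(|Z| > t) ~ c/t, so doubling the threshold should roughly halve the tail probability:

    ```python
    import random

    # Sample the quotient of two independent standard normals and check
    # that the tail probability halves when the threshold doubles.
    rng = random.Random(4)
    n = 400_000
    z = [rng.gauss(0.0, 1.0) / rng.gauss(0.0, 1.0) for _ in range(n)]

    p10 = sum(abs(v) > 10.0 for v in z) / n
    p20 = sum(abs(v) > 20.0 for v in z) / n
    ratio = p10 / p20  # expect roughly 2 under the x^-2 tail
    ```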

  20. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results came from a Poisson process was a minuscule 10^-44. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
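    The diagnostic behind this conclusion is overdispersion: a Poisson spatial distribution of substitutions forces variance = mean across sites, while the negative binomial (and its geometric limit) has variance exceeding the mean. Synthetic geometric counts illustrate the dispersion index that rejects the Poisson:

    ```python
    import random

    # Draw geometric substitution counts per site, support {0, 1, 2, ...},
    # P(c = k) = p (1-p)^k, and compute the variance-to-mean (dispersion) index.
    rng = random.Random(5)
    p = 0.4
    counts = []
    for _ in range(10_000):
        c = 0
        while rng.random() > p:  # each failure adds one substitution
            c += 1
        counts.append(c)

    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
    dispersion = var / mean  # ~1 for Poisson; 1/p > 1 for the geometric
    ```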

  1. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
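    For the uniform-activity case, the hit statistics follow a Poisson model: with mean traversals λ proportional to dose, P(at least one hit) = 1 − e^{−λ} and P(multiple hits) = 1 − e^{−λ}(1 + λ); a deposition hot spot multiplies λ by a local enhancement factor. A sketch with illustrative numbers (not the paper's simulated doses or enhancement factors):

    ```python
    import math

    # Poisson hit model for alpha-particle traversals of a cell nucleus.
    def p_hit(lam):
        return 1.0 - math.exp(-lam)

    def p_multi(lam):
        return 1.0 - math.exp(-lam) * (1.0 + lam)

    lam_uniform = 0.05   # mean hits per nucleus, low-dose uniform case
    enhancement = 50.0   # illustrative local deposition enhancement factor
    uniform_multi = p_multi(lam_uniform)
    hotspot_multi = p_multi(lam_uniform * enhancement)
    ```

    For small λ, p_multi grows like λ²/2, i.e. non-linearly in dose, which is the qualitative point of the hot-spot comparison.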

  2. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude possible negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both the Gaussian and the modified Gaussian probability density functions at different degrees of polarization.

  3. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
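    The U-shaped occupation-time statistics have a classical prototype: for a symmetric random walk, the fraction of time spent on one side of the origin piles up near 0 and 1 rather than near 1/2 (Lévy's arcsine law). A sketch of that prototype, as an analogue of the bimodal occupation-time pdfs discussed in the paper:

    ```python
    import random

    # Fraction of time a symmetric random walk spends strictly above the origin.
    def occupation_fraction(n_steps, rng):
        x, pos_time = 0, 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
            if x > 0:
                pos_time += 1
        return pos_time / n_steps

    rng = random.Random(6)
    fracs = [occupation_fraction(1000, rng) for _ in range(2000)]
    # arcsine law: mass concentrates near the edges, not the middle
    edge = sum(f < 0.1 or f > 0.9 for f in fracs) / len(fracs)
    middle = sum(0.4 < f < 0.6 for f in fracs) / len(fracs)
    ```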

  4. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess the consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations that varied in density and degree of spatial clustering. Because of the logistics and costs of large-river sampling and the spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for a fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, the designs did differ in the rate at which occupied quadrats were encountered: occupied units had a higher probability of selection under adaptive designs than under conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce the distance between sampling units, and thus performed better when distance travelled was considered. Based on these comparisons, we provide general recommendations on sampling designs for freshwater mussels in the UMR, and presumably other large rivers.
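    The design-evaluation idea can be sketched in miniature: place a clustered population on a grid, draw repeated simple random samples of quadrats, and estimate the CV of the abundance estimator across samples. Adaptive or two-stage designs would replace the sampling step; every number here is illustrative:

    ```python
    import random

    # Clustered population on a grid: mussels are scattered around cluster centres.
    def make_clustered_grid(size, n_clusters, per_cluster, spread, rng):
        grid = [[0] * size for _ in range(size)]
        for _ in range(n_clusters):
            cx, cy = rng.randrange(size), rng.randrange(size)
            for _ in range(per_cluster):
                x = min(size - 1, max(0, cx + rng.randint(-spread, spread)))
                y = min(size - 1, max(0, cy + rng.randint(-spread, spread)))
                grid[x][y] += 1
        return grid

    # Simple random sampling (with replacement) of quadrats, expanded to a total.
    def estimate_total(grid, n_quadrats, rng):
        size = len(grid)
        cells = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_quadrats)]
        mean_count = sum(grid[x][y] for x, y in cells) / n_quadrats
        return mean_count * size * size

    rng = random.Random(7)
    grid = make_clustered_grid(50, 30, 40, 2, rng)
    true_total = sum(map(sum, grid))  # 30 clusters * 40 mussels = 1200
    ests = [estimate_total(grid, 100, rng) for _ in range(500)]
    mean_est = sum(ests) / len(ests)
    cv = (sum((e - mean_est) ** 2 for e in ests) / (len(ests) - 1)) ** 0.5 / mean_est
    ```

    Repeating the experiment with different clustering parameters shows how strongly the CV depends on the spatial distribution, which is the paper's central point.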

  5. Radar sea reflection for low-e targets

    NASA Astrophysics Data System (ADS)

    Chow, Winston C.; Groves, Gordon W.

    1998-09-01

    Modeling radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representation of the reflection coefficient at each point of the sea surface as a function of the Specular Deviation Angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification and retain enough fidelity to obtain a practical multipath model. The 'specular deviation angle' as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions and, together with the beam-broadening effect of the small surface ripples, determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived. The distribution of the specular deviation angle as a function of position on the mean sea surface is described.

  6. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    NASA Astrophysics Data System (ADS)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass conservation and Gibbs free energy conservation are imposed as optimization constraints. The optimal steady-state enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  7. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.
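The noise-free reduction to a negative binomial distribution can be checked numerically in the fully developed limiting case: Poisson photocounts driven by gamma-distributed, M-fold aperture-averaged speckle intensity. This is a simplified sketch of the standard Poisson-gamma mixture, not the paper's partially developed (diffuse-plus-specular) case; the parameter values are illustrative.

```python
import numpy as np

def speckle_photocounts(mean_counts=5.0, m=3, n=200_000, seed=1):
    """Poisson counts conditioned on gamma-distributed integrated intensity
    (M-fold aperture-averaged, fully developed speckle)."""
    rng = np.random.default_rng(seed)
    intensity = rng.gamma(shape=m, scale=mean_counts / m, size=n)
    return rng.poisson(intensity)

counts = speckle_photocounts()
# The mixture is negative binomial: variance = mean + mean**2 / M,
# i.e. an excess over the Poisson value (variance = mean).
```

The variance excess `mean**2 / M` is the aperture-averaging signature: as M grows the statistics approach pure Poisson, which is why aperture-averaged receivers behave differently from point detectors.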

  8. Precipitation Cluster Distributions: Current Climate Storm Statistics and Projected Changes Under Global Warming

    NASA Astrophysics Data System (ADS)

    Quinn, Kevin Martin

    The total amount of precipitation integrated across a precipitation cluster (contiguous precipitating grid cells exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance, expressed as the rate of water mass lost or latent heat released, i.e. the power of the disturbance. Probability distributions of cluster power are examined during boreal summer (May-September) and winter (January-March) using satellite-retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) 3B42 and Special Sensor Microwave Imager and Sounder (SSM/I and SSMIS) programs, model output from the High Resolution Atmospheric Model (HIRAM, roughly 0.25-0.5° resolution), seven 1-2° resolution members of the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiment, and National Center for Atmospheric Research Large Ensemble (NCAR LENS). Spatial distributions of precipitation-weighted centroids are also investigated in observations (TRMM-3B42) and climate models during winter as a metric for changes in mid-latitude storm tracks. Observed probability distributions for both seasons are scale-free from the smallest clusters up to a cutoff scale at high cluster power, after which the probability density drops rapidly. When low rain rates are excluded by choosing a minimum rain rate threshold in defining clusters, the models accurately reproduce observed cluster power statistics and winter storm tracks. Changes in behavior in the tail of the distribution, above the cutoff, are important for impacts since these quantify the frequency of the most powerful storms. End-of-century cluster power distributions and storm track locations are investigated in these models under a "business as usual" global warming scenario. The probability of high cluster power events increases by end-of-century across all models, by up to an order of magnitude for the highest-power events for which statistics can be computed. 
For the three models in the suite with continuous time series of high resolution output, there is substantial variability in when these probability increases for the most powerful precipitation clusters become detectable, ranging from detectable within the observational period to statistically significant trends emerging only after 2050. A similar analysis of National Centers for Environmental Prediction (NCEP) Reanalysis 2 and SSM/I-SSMIS rain rate retrievals in the recent observational record does not yield reliable evidence of trends in high-power cluster probabilities at this time. Large impacts on mid-latitude storm tracks are projected over the West Coast and eastern North America, with no less than 8 of the 9 models examined showing large increases by end-of-century in the probability density of the most powerful storms, ranging up to a factor of 6.5 in the highest range bin for which historical statistics are computed. However, within these regional domains, there is considerable variation among models in pinpointing exactly where the largest increases will occur.
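Cluster power as defined above (rain rate summed over contiguous cells exceeding a threshold) is straightforward to compute on a gridded field. A sketch using connected-component labeling on synthetic lognormal data; the field, the 4-connectivity, and the threshold of 1 are our illustrative assumptions, not the paper's retrieval details:

```python
import numpy as np
from scipy import ndimage

def cluster_powers(rain, threshold=1.0):
    """Total rain rate summed over each contiguous cluster of grid cells
    exceeding `threshold` (4-connectivity)."""
    labels, n_clusters = ndimage.label(rain > threshold)
    return ndimage.sum(rain, labels, index=np.arange(1, n_clusters + 1))

rng = np.random.default_rng(2)
rain = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 100))  # synthetic rain-rate field
powers = cluster_powers(rain)
# A histogram of `powers` approximates the cluster-power probability distribution.
```

Raising `threshold` excludes low rain rates, the step the abstract identifies as necessary before model and observed cluster statistics agree.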

  9. Integrated seismic stochastic inversion and multi-attributes to delineate reservoir distribution: Case study MZ fields, Central Sumatra Basin

    NASA Astrophysics Data System (ADS)

    Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.

    2017-07-01

    This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, within the depth interval between the Top Sihapas and Top Pematang. The method used is stochastic inversion integrated with seismic multi-attribute analysis through a Probabilistic Neural Network (PNN). The stochastic method is used to predict the sandstone probability map from impedance varied over 50 realizations, which together yield a robust probability estimate. Stochastic seismic inversion is more interpretive because it directly gives the value of the property. Our experiment shows that the acoustic impedance (AI) from stochastic inversion captures a more diverse range of uncertainty, so that the probability values are close to the actual values. The resulting AI is then used as input to a multi-attribute analysis, which is used to predict the gamma ray, density and porosity logs. A stepwise regression algorithm is applied to select the attributes used in the PNN. The PNN method is chosen because it gives the best correlation among the neural network methods tested. Finally, we interpret the products of the multi-attribute analysis, in the form of pseudo-gamma ray, density and pseudo-porosity volumes, to delineate the reservoir distribution. Our interpretation shows that the structural trap is identified in the southeastern part of the study area, along the anticline.

  10. A statistical treatment of bioassay pour fractions

    NASA Astrophysics Data System (ADS)

    Barengoltz, Jack; Hughes, David

    A bioassay is a method for estimating the number of bacterial spores on a spacecraft surface for the purpose of demonstrating compliance with planetary protection (PP) requirements (Ref. 1). The details of the process may be seen in the appropriate PP document (e.g., for NASA, Ref. 2). In general, the surface is mechanically sampled with a damp sterile swab or wipe. The completion of the process is colony formation in a growth medium in a plate (Petri dish); the colonies are counted. Consider a set of samples from randomly selected, known areas of one spacecraft surface, for simplicity. One may calculate the mean and standard deviation of the bioburden density, which is the ratio of counts to area sampled. The standard deviation represents an estimate of the variation from place to place of the true bioburden density commingled with the precision of the individual sample counts. The accuracy of individual sample results depends on the equipment used, the collection method, and the culturing method. One aspect that greatly influences the result is the pour fraction, which is the quantity of fluid added to the plates divided by the total fluid used in extracting spores from the sampling equipment. In an analysis of a single sample’s counts due to the pour fraction, one seeks to answer the question: if a certain number of spores is counted with a known pour fraction, what is the probability that an additional number of spores is present in the part of the rinse not poured? This is given for specific values by the binomial distribution density, where detection (of culturable spores) is success and the probability of success is the pour fraction. A special summation over the binomial distribution, equivalent to adding over all possible values of the true total number of spores, is performed. This distribution, when normalized, will almost yield the desired quantity: the probability that the additional number of spores does not exceed a certain value. 
Of course, for a desired value of uncertainty, one must invert the calculation. However, this probability of finding exactly the number of spores in the poured part is correct only in the case where all values of the true number of spores greater than or equal to the adjusted count are equally probable. This is not realistic, of course, but the result can only overestimate the uncertainty, so it remains useful. In probabilistic terms, one has the conditional probability given any true total number of spores. Therefore one must multiply it by the probability of each possible true count before the summation. If the counts for a sample set (of which this is one sample) are available, one may use the calculated variance and the normal probability distribution. In this approach, one assumes a normal distribution and neglects the contribution from spatial variation. The former is a common assumption. The latter can only add to the conservatism (overestimating the number of spores at a given level of confidence). A more straightforward approach is to assume a Poisson probability distribution for the measured total sample set counts, and use the product of the number of samples and the mean number of counts per sample as the mean of the Poisson distribution. It is necessary to set the total count to 1 in the Poisson distribution when the actual total count is zero. Finally, even when the planetary protection requirements for spore burden refer only to the mean values, they require an adjustment for pour fraction and method efficiency (a PP specification based on independent data). The adjusted mean values are a 50/50 proposition (e.g., the probability of the true total counts in the sample set exceeding the estimate is 0.50). However, this is highly unconservative when the total counts are zero, since no adjustment to the mean values occurs for either pour fraction or efficiency. The recommended approach is once again to set the total counts to 1, but now applied to the mean values. 
Then one may apply the corrections to the revised counts. It can be shown by the methods developed in this work that this change is usually conservative enough to increase the level of confidence in the estimate to 0.5. 1. NASA. (2005) Planetary protection provisions for robotic extraterrestrial missions. NPR 8020.12C, April 2005, National Aeronautics and Space Administration, Washington, DC. 2. NASA. (2010) Handbook for the Microbiological Examination of Space Hardware, NASA-HDBK-6022, National Aeronautics and Space Administration, Washington, DC.
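Under the flat-prior assumption described above (all true totals at or above the observed count equally probable), the normalized summation over the binomial distribution has a closed negative-binomial form. A sketch in our own notation: k counted spores, pour fraction f, and m additional unpoured spores; the numbers used in the example are illustrative.

```python
from math import comb

def prob_additional_at_most(k, f, m):
    """P(at most m spores in the unpoured rinse | k counted, pour fraction f),
    assuming all true totals >= k are a priori equally probable.
    The posterior over the true total is negative binomial:
    P(total = k + j) = C(k + j, k) * f**(k + 1) * (1 - f)**j."""
    return sum(comb(k + j, k) * f ** (k + 1) * (1 - f) ** j
               for j in range(m + 1))

# With 3 spores counted and a 0.8 pour fraction, the chance that nothing
# was missed is f**(k + 1):
p_none_missed = prob_additional_at_most(3, 0.8, 0)
```

Inverting this relation numerically (finding the smallest m whose probability reaches a desired confidence) is the "invert the calculation" step the abstract refers to.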

  11. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
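A minimal sketch of the encounter-rate idea (ours; the paper's algorithm and bounds are more refined): agents random-walk on a grid torus and each estimates the density as encounters per step. Grid size, agent count, and step count are illustrative assumptions.

```python
import numpy as np

def encounter_rate_density(n_agents=200, grid=50, steps=400, seed=3):
    """Each agent random-walks on a grid torus and counts how many other
    agents share its cell after every step; density is estimated as
    encounters per step."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, grid, size=(n_agents, 2))
    moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
    encounters = np.zeros(n_agents)
    for _ in range(steps):
        pos = (pos + moves[rng.integers(0, 4, size=n_agents)]) % grid
        cells = pos[:, 0] * grid + pos[:, 1]  # flatten (row, col) to cell id
        _, inverse, counts = np.unique(cells, return_inverse=True,
                                       return_counts=True)
        encounters += counts[inverse] - 1     # co-located agents = collisions
    return encounters / steps

estimates = encounter_rate_density()
true_density = 200 / 50 ** 2  # 0.08 agents per cell
```

The per-agent estimates concentrate around the true density, even though nearby agents collide repeatedly, which is exactly the dependency structure the paper's analysis has to control.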

  12. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be rapidly reached, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  13. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    PubMed

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R(max), and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy epsilon(R) = R. The effective chemical potential mu governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter beta decreases. Interestingly, the product beta*mu is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R(g), where R(g) is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed into Fermi-like subdistributions for different atomic types tau, F(tau)(r), with Sigma(tau)F(tau)(r) = F(r), which depend on two additional parameters mu(tau) and h(tau). The chemical potential mu(tau) affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h(tau), which appears in a type-dependent atomic effective energy, epsilon(tau)(r) = h(tau)r, and is strongly correlated to available hydrophobicity scales. 
Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, epsilon*(tau)(r) = h*(tau) r^alpha(tau), in which case a correlation with hydrophobicity scales is found for the product alpha(tau)h*(tau). These results indicate that compact globular proteins are consistent with a thermodynamic system governed by hydrophobic-like energy functions, with reduced distances from the geometrical center, reflecting atomic burials, and provide a conceptual framework for the eventual prediction from sequence of a few parameters from which whole atomic probability distributions and potentials of mean force can be reconstructed. Copyright 2006 Wiley-Liss, Inc.
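The radial profile implied by these fits, spherical-shell volume times a Fermi-Dirac occupancy with effective energy epsilon(R) = R, can be sketched directly. The parameter values beta = 10 and mu = 1 below are illustrative, not the paper's fitted values:

```python
import numpy as np

def atom_count_profile(R, beta=10.0, mu=1.0):
    """Unnormalized radial atom-count density: shell volume ~ R**2 times a
    Fermi-Dirac factor with effective atomic energy epsilon(R) = R."""
    return R ** 2 / (np.exp(beta * (R - mu)) + 1.0)

R = np.linspace(0.0, 3.0, 3001)
n = atom_count_profile(R)
# Quadratic rise (constant core density) for R << mu, then a rapid fall
# near R = mu, reproducing the qualitative shape described in the abstract.
```

Large beta sharpens the fall-off at mu, mirroring the abstract's picture of roughly half the atoms in a dense interior and half in a rapidly thinning exterior.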

  14. Statistical characteristics of the sequential detection of signals in correlated noise

    NASA Astrophysics Data System (ADS)

    Averochkin, V. A.; Baranov, P. E.

    1985-10-01

    A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neyman-Pearson detection.
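The two-threshold Wald rule underlying these duration statistics can be sketched for the simplest case of i.i.d. Gaussian samples (the paper treats correlated noise, which changes the log-likelihood-ratio increments; the thresholds are the standard Wald approximations):

```python
import numpy as np

def wald_sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's two-threshold sequential probability ratio test of
    H1: mean = mu1 against H0: mean = mu0 for i.i.d. Gaussian samples.
    Returns (decision, number of samples used)."""
    upper = np.log((1 - beta) / alpha)   # crossing -> accept H1
    lower = np.log(beta / (1 - alpha))   # crossing -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Gaussian log-likelihood-ratio increment
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

rng = np.random.default_rng(4)
decision, n_used = wald_sprt(rng.normal(1.0, 1.0, size=1000))
```

The random stopping index `n_used` is exactly the "duration" whose distribution the abstract analyzes; correlation in the noise changes both its distribution and the average sample number.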

  15. Modeling financial markets by the multiplicative sequence of trades

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-12-01

    We introduce the stochastic multiplicative point process modeling trading activity of financial markets. Such a model system exhibits a power-law spectral density S(f) ∝ 1/f^β, scaling as a power of frequency, for various values of β between 0.5 and 2. Furthermore, we analyze the relation between the power-law autocorrelations and the origin of the power-law probability distribution of the trading activity. The model reproduces the spectral properties of trading activity and explains the mechanism of power-law distribution in real markets.

  16. Laser Heating in a Dense Plasma Focus.

    DTIC Science & Technology

    The report is divided into two parts. In the first part an account is given of the measurement of the momentum distribution of the deuterons ejected from a dense plasma focus. The results show the existence of a pronounced non-Maxwellian distribution and a small population of deuterons accelerated to the voltage of the condenser bank. In the second part, theoretical calculations of laser heating establish the presence of a large density gradient which probably accounts for the large currents detected in such plasmas. (Author)

  17. The Self-Organization of a Spoken Word

    PubMed Central

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participants’ distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213

  18. Supernova Driving. II. Compressive Ratio in Molecular-cloud Turbulence

    NASA Astrophysics Data System (ADS)

    Pan, Liubin; Padoan, Paolo; Haugbølle, Troels; Nordlund, Åke

    2016-07-01

    The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.
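The lognormal density PDF and rms density-Mach relation referred to here have standard closed forms in the isothermal case. A sketch with an assumed forcing parameter b (the paper reports scatter about such relations, and an additional power-law tail with self-gravity, which this sketch omits):

```python
import numpy as np

def density_pdf(s, sigma_s):
    """PDF of s = ln(rho/rho_0): Gaussian in s with mean -sigma_s**2 / 2,
    chosen so that the mean density equals rho_0."""
    s0 = -0.5 * sigma_s ** 2
    return (np.exp(-(s - s0) ** 2 / (2 * sigma_s ** 2))
            / np.sqrt(2 * np.pi * sigma_s ** 2))

def sigma_s_of_mach(mach, b=0.5):
    """rms of s from the rms Mach number M: sigma_s**2 = ln(1 + b**2 M**2),
    with b set by the compressive/solenoidal forcing mix."""
    return np.sqrt(np.log1p((b * mach) ** 2))

s = np.linspace(-15.0, 15.0, 30001)
p = density_pdf(s, sigma_s_of_mach(10.0))
```

The compressive ratio enters through b: a larger compressive fraction gives larger b, hence broader density PDFs at fixed Mach number, which is why the mean compressive ratio of ~0.3 found here matters for star formation models.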

  19. Microdosimetric Analysis Confirms Similar Biological Effectiveness of External Exposure to Gamma-Rays and Internal Exposure to 137Cs, 134Cs, and 131I

    PubMed Central

    Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki

    2014-01-01

    The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as effectiveness of internal exposure relative to the external exposure to γ-rays) is occasionally believed to be much greater than unity due to insufficient discussions on the difference of their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses in subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as the external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability density. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure from the 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but their maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099

  20. An efficient distribution method for nonlinear transport problems in highly heterogeneous stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi

    2016-04-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach of the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational costs (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. 
These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.

  1. Inferring the three-dimensional distribution of dust in the Galaxy with a non-parametric method . Preparing for Gaia

    NASA Astrophysics Data System (ADS)

    Rezaei Kh., S.; Bailer-Jones, C. A. L.; Hanson, R. J.; Fouesneau, M.

    2017-02-01

    We present a non-parametric model for inferring the three-dimensional (3D) distribution of dust density in the Milky Way. Our approach uses the extinction measured towards stars at different locations in the Galaxy at approximately known distances. Each extinction measurement is proportional to the integrated dust density along its line of sight (LoS). Making simple assumptions about the spatial correlation of the dust density, we can infer the most probable 3D distribution of dust across the entire observed region, including along sight lines which were not observed. This is possible because our model employs a Gaussian process to connect all LoS. We demonstrate the capability of our model to capture detailed dust density variations using mock data and simulated data from the Gaia Universe Model Snapshot. We then apply our method to a sample of giant stars observed by APOGEE and Kepler to construct a 3D dust map over a small region of the Galaxy. Owing to our smoothness constraint and its isotropy, we provide one of the first maps which does not show the "fingers of God" effect.

  2. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large-sample distribution are derived. The distribution of this estimate even for very large samples is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of extreme mean application.
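For the population quantity itself (as opposed to the paper's unbiased estimator and its sampling distribution), the extreme mean of a p-th probability truncated normal has a closed form via the standard normal density and quantile. A quick numerical check, using scipy rather than the paper's program:

```python
import numpy as np
from scipy import stats

def extreme_mean(mu, sigma, p):
    """Mean of the upper tail of N(mu, sigma**2) with tail probability p:
    E[X | X > x_p] = mu + sigma * phi(z_p) / p, where z_p = Phi^{-1}(1 - p)."""
    z = stats.norm.ppf(1.0 - p)
    return mu + sigma * stats.norm.pdf(z) / p

# Monte Carlo check: mean of the top 10% of a standard normal sample.
rng = np.random.default_rng(7)
x = rng.normal(size=1_000_000)
mc = x[x > np.quantile(x, 0.9)].mean()
```

For p = 0.1 the closed form gives about 1.755 standard deviations above the mean, and the Monte Carlo tail mean agrees closely.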

  3. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models express the expected value as a function of the properties of the waves on the ocean and of the winds that generated them. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  4. Probabilistic distribution and stochastic P-bifurcation of a hybrid energy harvester under colored noise

    NASA Astrophysics Data System (ADS)

    Mokem Fokou, I. S.; Nono Dueyou Buckjohn, C.; Siewe Siewe, M.; Tchawoua, C.

    2018-03-01

    In this manuscript, a hybrid energy harvesting system combining piezoelectric and electromagnetic transduction and subjected to colored noise is investigated. By using the stochastic averaging method, the stationary probability density functions of amplitudes are obtained and reveal interesting dynamics related to the long-term behavior of the device. From the stationary probability densities, we discuss the stochastic bifurcation through qualitative changes, showing that the noise intensity, the correlation time, and other system parameters can be treated as bifurcation parameters. Numerical simulations are made for comparison with the analytical findings. The mean first passage time (MFPT) is computed numerically in order to investigate the stability of the system. By computing the mean residence time (TMR), we explore the stochastic resonance phenomenon and show how it is related to the correlation time of the colored noise and to high output power.

  5. N -tag probability law of the symmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb

    2018-06-01

    The symmetric exclusion process (SEP), in which particles hop symmetrically on a discrete line with hard-core constraints, is a paradigmatic model of subdiffusion in confined systems. This anomalous behavior is a direct consequence of strong spatial correlations induced by the requirement that the particles cannot overtake each other. Even if this fact has been recognized qualitatively for a long time, up to now there has been no full quantitative determination of these correlations. Here we study the joint probability distribution of an arbitrary number of tagged particles in the SEP. We determine analytically its large-time limit for an arbitrary density of particles, and its full dynamics in the high-density limit. In this limit, we obtain the time-dependent large deviation function of the problem and unveil a universal scaling form shared by the cumulants.

  6. Unfolding the laws of star formation: the density distribution of molecular clouds.

    PubMed

    Kainulainen, Jouni; Federrath, Christoph; Henning, Thomas

    2014-04-11

    The formation of stars shapes the structure and evolution of entire galaxies. The rate and efficiency of this process are affected substantially by the density structure of the individual molecular clouds in which stars form. The most fundamental measure of this structure is the probability density function of volume densities (ρ-PDF), which determines the star formation rates predicted with analytical models. This function has remained unconstrained by observations. We have developed an approach to quantify ρ-PDFs and establish their relation to star formation. The ρ-PDFs indicate a density threshold of star formation and allow us to quantify the star formation efficiency above it. The ρ-PDFs provide new constraints for star formation theories and correctly predict several key properties of the star-forming interstellar medium.

  7. Cylinders out of a top hat: counts-in-cells for projected densities

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon

    2018-06-01

    Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders, applicable to photometric surveys. It yields analytical predictions, accurate to a few per cent, for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders, and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime, where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders, validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.

  8. Hydrodynamic Flow Fluctuations in √sNN = 5.02 TeV PbPb Collisions

    NASA Astrophysics Data System (ADS)

    Castle, James R.

    The collective, anisotropic expansion of the medium created in ultrarelativistic heavy-ion collisions, known as flow, is characterized through a Fourier expansion of the final-state azimuthal particle density. In the Fourier expansion, flow harmonic coefficients vn correspond to shape components in the final-state particle density, which are a consequence of similar spatial anisotropies in the initial-state transverse energy density of a collision. Flow harmonic fluctuations are studied for PbPb collisions at √sNN = 5.02 TeV using the CMS detector at the CERN LHC. Flow harmonic probability distributions p(vn) are obtained using particles with 0.3 < pT < 3.0 GeV/c and |η| < 1.0 by removing finite-multiplicity resolution effects from the observed azimuthal particle density through an unfolding procedure. Cumulant elliptic flow harmonics (n = 2) are determined from the moments of the unfolded p(v2) distributions and used to construct observables in 5% wide centrality bins up to 60% that relate to the initial-state spatial anisotropy. Hydrodynamic models predict that fluctuations in the initial-state transverse energy density will lead to a non-Gaussian component in the elliptic flow probability distributions that manifests as a negative skewness. A statistically significant negative skewness is observed for all centrality bins, as evidenced by a splitting between the higher-order cumulant elliptic flow harmonics. The unfolded p(v2) distributions are transformed assuming a linear relationship between the initial-state spatial anisotropy and final-state flow and are fitted with elliptic power law and Bessel-Gaussian parametrizations to infer information on the nature of initial-state fluctuations. The elliptic power law parametrization is found to provide a more accurate description of the fluctuations than the Bessel-Gaussian parametrization.
In addition, the event-shape engineering technique, where events are further divided into classes based on an observed ellipticity, is used to study fluctuation-driven differences in the initial-state spatial anisotropy for a given collision centrality that would otherwise be destroyed by event-averaging techniques. Correlations between the first and second moments of p(vn) distributions and event ellipticity are measured for harmonic orders n = 2-4 by coupling event-shape engineering to the unfolding technique.

  9. Estimating the probability that the Taser directly causes human ventricular fibrillation.

    PubMed

    Sun, H; Haemmerich, D; Rahko, P S; Webster, J G

    2010-04-01

    This paper describes the first methodology and results for estimating the order of probability for Tasers directly causing human ventricular fibrillation (VF). The probability of an X26 Taser causing human VF was estimated using: (1) the current density near the human heart, computed with 3D finite-element (FE) models; (2) prior data on the maximum dart-to-heart distances that caused VF in pigs; (3) minimum skin-to-heart distances measured in erect humans by echocardiography; and (4) the dart landing distribution estimated from police reports. The estimated mean probability of human VF was 0.001 for data from a pig having a chest wall resected to the ribs and 0.000006 for data from a pig with no resection when inserting a blunt probe. The VF probability for a given dart location decreased with the dart-to-heart horizontal distance (radius) on the skin surface.

  10. Analysis of high-resolution foreign exchange data of USD-JPY for 13 years

    NASA Astrophysics Data System (ADS)

    Mizuno, Takayuki; Kurihara, Shoko; Takayasu, Misako; Takayasu, Hideki

    2003-06-01

    We analyze high-resolution foreign exchange data consisting of 20 million data points of USD-JPY over 13 years to report firm statistical laws in the distributions and correlations of exchange rate fluctuations. A conditional probability density analysis clearly shows the existence of trend-following movements at a time scale of 8 ticks, about 1 min.
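
A conditional analysis of this kind is easy to sketch on synthetic increments (the AR(1) coupling of 0.3 below is an invented stand-in for the reported tick-scale trend-following, not a value estimated from the USD-JPY data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rate increments with a mild trend-following term:
# dx[t] = 0.3 * dx[t-1] + noise.  The 0.3 coupling is illustrative only.
n = 200_000
eps = rng.normal(0.0, 1.0, n)
dx = np.zeros(n)
for t in range(1, n):
    dx[t] = 0.3 * dx[t - 1] + eps[t]

# Conditional analysis: mean of the next move given the sign of the last.
up = dx[:-1] > 0
mean_after_up = dx[1:][up].mean()
mean_after_down = dx[1:][~up].mean()
```

A positive `mean_after_up` together with a negative `mean_after_down` is the signature of trend-following; for an uncorrelated walk both conditional means are zero.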

  11. Extinction time of a stochastic predator-prey model by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao

    2018-03-01

    The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.

  12. A Bayesian approach to modeling 2D gravity data using polygon states

    NASA Astrophysics Data System (ADS)

    Titus, W. J.; Titus, S.; Davis, J. R.

    2015-12-01

    We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but that it has no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. 
In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match the properties of the object.

  13. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. 
The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.

  14. Statistical properties of two sine waves in Gaussian noise.

    NASA Technical Reports Server (NTRS)

    Esposito, R.; Wilson, L. R.

    1973-01-01

    A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seems to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and of the instantaneous value, the moments of these distributions, and the corresponding cumulative distribution function (cdf).
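
Since no closed form appears to be available, the envelope statistics lend themselves to a Monte Carlo check; a minimal sketch follows (the amplitudes, noise level, and uniform relative-phase assumption are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def envelope_samples(a1, a2, sigma, n=200_000):
    # Monte Carlo draws of the envelope of two sine waves (amplitudes a1,
    # a2, uniform relative phase) in narrowband Gaussian noise with std
    # sigma per quadrature.  All parameter values here are illustrative.
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    i = a1 + a2 * np.cos(phi) + rng.normal(0.0, sigma, n)   # in-phase
    q = a2 * np.sin(phi) + rng.normal(0.0, sigma, n)        # quadrature
    return np.hypot(i, q)

# Histogram estimate of the envelope pdf.
r = envelope_samples(1.0, 0.5, 0.3)
pdf, edges = np.histogram(r, bins=100, density=True)
```

A useful sanity check: with both amplitudes set to zero the envelope reduces to a Rayleigh variable, whose mean sigma·sqrt(π/2) the simulation should reproduce.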

  15. Density matrix approach to the hot-electron stimulated photodesorption

    NASA Astrophysics Data System (ADS)

    Kühn, Oliver; May, Volkhard

    1996-07-01

    The dissipative dynamics of the laser-induced nonthermal desorption of small molecules from a metal surface is investigated here. Based on the density matrix formalism a multi-state model is introduced which explicitly takes into account the continuum of electronic states in the metal. Various relaxation mechanisms for the electronic degrees of freedom are shown to govern the desorption dynamics and hence the desorption probability. Particular attention is paid to the modeling of the time dependence of the electron energy distribution in the metal which reflects different excitation conditions.

  16. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
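
The kernel-density building block can be sketched with an isotropic Gaussian kernel (a simplified stand-in for the multivariate estimator used in the paper; the bandwidth and the toy 2-D data are illustrative):

```python
import numpy as np

def gaussian_kde_pdf(points, x, bandwidth):
    # Multivariate kernel density estimate with an isotropic Gaussian
    # kernel -- a simplified stand-in for the estimator in the paper.
    # points: (n, d) samples; x: (m, d) query locations.  The pairwise
    # difference array is (m, n, d), so keep m and n modest.
    n, d = points.shape
    diff = x[:, None, :] - points[None, :, :]
    sq = np.sum(diff**2, axis=-1) / bandwidth**2
    norm = (2.0 * np.pi) ** (d / 2.0) * bandwidth**d
    return np.exp(-0.5 * sq).sum(axis=1) / (n * norm)

rng = np.random.default_rng(2)
samples = rng.normal(size=(5000, 2))          # toy 2-D "source" samples
dens = gaussian_kde_pdf(samples, np.array([[0.0, 0.0]]), bandwidth=0.3)
```

For standard-normal samples the estimate at the origin should land near the true density 1/(2π) ≈ 0.16, slightly smoothed by the kernel width.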

  17. Robust statistical reconstruction for charged particle tomography

    DOEpatents

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

    Systems and methods for charged particle detection, including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data, to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and to determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.

  18. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    NASA Astrophysics Data System (ADS)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.

  19. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    NASA Astrophysics Data System (ADS)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane, generating the peak ground motion. The appropriate ground motion prediction equations (GMPE) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with the centroid fixed at the geometrical mean is discussed; in this model the hazard is reduced at the edges because the effective size is reduced. There is currently a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, fault rupture models can be added while separating geometrical and propagation effects.

  20. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multilevel modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density: word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  1. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  2. Electron density and electron temperature measurement in a bi-Maxwellian electron distribution using a derivative method of Langmuir probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Ikjin; Chung, ChinWook; Youn Moon, Se

    2013-08-15

    In plasma diagnostics with a single Langmuir probe, the electron temperature Te is usually obtained from the slope of the logarithm of the electron current or from the electron energy probability function of the current (I)-voltage (V) curve. Recently, Chen [F. F. Chen, Phys. Plasmas 8, 3029 (2001)] suggested a derivative analysis method to obtain Te from the ratio between the probe current and the derivative of the probe current at the plasma potential, where the ion current becomes zero. Based on this method, electron temperatures and electron densities were measured and compared with those from electron energy distribution function (EEDF) measurements in Maxwellian and bi-Maxwellian electron distribution conditions. In a bi-Maxwellian electron distribution, we found that the electron temperature Te obtained from the method is always lower than the effective temperature Teff derived from EEDFs. The theoretical analysis for this is presented.
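
The identity behind such derivative methods, Te = I/(dI/dV) on the exponential electron-retardation branch, can be verified on a synthetic Maxwellian I-V curve (all numerical values below are invented for illustration; the actual method evaluates the ratio at the plasma potential with the ion current treated explicitly):

```python
import numpy as np

# Synthetic electron-retardation branch of a Langmuir I-V curve for a
# Maxwellian plasma: I(V) = I_es * exp((V - Vp) / Te), ion current ignored.
Te_true = 3.0        # electron temperature, eV (illustrative)
Vp = 10.0            # plasma potential, V (illustrative)
I_es = 1e-3          # electron saturation current, A (illustrative)
V = np.linspace(0.0, Vp, 500)
I = I_es * np.exp((V - Vp) / Te_true)

# Derivative method: for an exponential branch, Te = I / (dI/dV)
# at every point, so the recovered ratio should be flat at Te_true.
dIdV = np.gradient(I, V)
Te_est = np.median(I[50:-50] / dIdV[50:-50])   # trim gradient edge effects
```

On real data the ratio is evaluated at the plasma potential, and for a bi-Maxwellian EEDF it underestimates the effective temperature, which is the effect the paper quantifies.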

  3. Constraining the interior density profile of a Jovian planet from precision gravity field data

    NASA Astrophysics Data System (ADS)

    Movshovitz, Naor; Fortney, Jonathan J.; Helled, Ravit; Hubbard, William B.; Thorngren, Daniel; Mankovich, Chris; Wahl, Sean; Militzer, Burkhard; Durante, Daniele

    2017-10-01

    The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Planetary gravity fields are usually described by the coefficients in an expansion of the gravitational potential. Recently, high precision measurements of these coefficients for Jupiter and Saturn have been made by the radio science instruments on the Juno and Cassini spacecraft, respectively.The resulting coefficients come with an associated uncertainty. And while the task of matching a given density profile with a given set of gravity coefficients is relatively straightforward, the question of how best to account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on imperfect knowledge of the H/He equation of state and on the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet, constrained only by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. 
Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We demonstrate this approach with a sample of Jupiter interior models based on recent Juno data and discuss prospects for Saturn.
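
The final step of the sampling approach, turning an ensemble of density curves into probability distributions for derived quantities, can be sketched with toy profiles (the profile family, radius grid, and spherical mass integral below are invented placeholders, not Juno-constrained models):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ensemble standing in for the sampled interior profiles rho(a):
# each curve is one draw "consistent with the gravity data".
a = np.linspace(0.0, 1.0, 200)                      # normalized radius
curves = (1.0 + rng.normal(0.0, 0.05, (1000, 1))) * (1.0 - a**2)

# Any functional of rho inherits a probability distribution from the
# sample; here, a spherically approximated total mass via trapezoids.
f = 4.0 * np.pi * a**2 * curves
mass = np.sum(0.5 * (f[:, 1:] + f[:, :-1]) * np.diff(a), axis=1)
lo, med, hi = np.percentile(mass, [16, 50, 84])
```

The same recipe gives distributions for central pressure, oblateness, or core mass: evaluate the functional on every sampled curve and read off percentiles.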

  4. Constraining Saturn's interior density profile from precision gravity field measurement obtained during Grand Finale

    NASA Astrophysics Data System (ADS)

    Movshovitz, N.; Fortney, J. J.; Helled, R.; Hubbard, W. B.; Mankovich, C.; Thorngren, D.; Wahl, S. M.; Militzer, B.; Durante, D.

    2017-12-01

    The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Recently, very high precision measurements of the gravity coefficients for Saturn have been made by the radio science instrument on the Cassini spacecraft during its Grand Finale orbits. The resulting coefficients come with an associated uncertainty. The task of matching a given density profile to a given set of gravity coefficients is relatively straightforward, but the question of how to best account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on assumptions regarding the imperfectly known H/He equation of state and the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet constrained by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. 
Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We apply this approach to produce a sample of Saturn interior models based on gravity data from Grand Finale orbits and discuss their implications.

  5. Classification and assessment of retrieved electron density maps in coherent X-ray diffraction imaging using multivariate analysis.

    PubMed

    Sekiguchi, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi

    2016-01-01

    Coherent X-ray diffraction imaging (CXDI) is one of the techniques used to visualize structures of non-crystalline particles of micrometer to submicrometer size from materials and biological science. In the structural analysis of CXDI, the electron density map of a sample particle can theoretically be reconstructed from a diffraction pattern by using phase-retrieval (PR) algorithms. However, in practice, the reconstruction is difficult because diffraction patterns are affected by Poisson noise and missing data in small-angle regions due to the beam stop and the saturation of detector pixels. In contrast to X-ray protein crystallography, in which the phases of diffracted waves are experimentally estimated, phase retrieval in CXDI relies entirely on the computational procedure driven by the PR algorithms. Thus, objective criteria and methods to assess the accuracy of retrieved electron density maps are necessary in addition to conventional parameters monitoring the convergence of PR calculations. Here, a data analysis scheme, named ASURA, is proposed that selects the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a diffraction pattern. Each electron density map, composed of J pixels, is expressed as a point in a J-dimensional space. Principal component analysis is applied to describe the characteristics of the distribution of the maps in the J-dimensional space. When the distribution is characterized by a small number of principal components, it is classified using the k-means clustering method. The classified maps are evaluated by several parameters to assess their quality. Using the proposed scheme, structure analysis of a diffraction pattern from a non-crystalline particle is conducted in two stages: estimation of the overall shape and determination of the fine structure inside the support shape. In each stage, the most accurate and probable density maps are objectively selected.
The validity of the proposed scheme is examined by application to diffraction data that were obtained from an aggregate of metal particles and a biological specimen at the XFEL facility SACLA using custom-made diffraction apparatus.
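
    The selection stage described above (each map treated as a point in a J-dimensional space, reduced by principal component analysis and then grouped by k-means) can be sketched as follows. This is a minimal illustration on synthetic maps; the pixel count, the two map families and the cluster count are demo assumptions, not the paper's settings.

```python
import numpy as np

def principal_components(maps, n_pc=2):
    """Project J-dimensional density maps onto their leading principal components."""
    X = maps - maps.mean(axis=0)               # center the point cloud
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_pc].T                     # scores in the reduced space

def kmeans(points, k=2, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns a cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# two synthetic families of retrieved maps (J = 100 pixels each)
rng = np.random.default_rng(1)
maps = np.vstack([rng.normal(0.0, 0.1, (20, 100)),    # family A
                  rng.normal(1.0, 0.1, (20, 100))])   # family B
scores = principal_components(maps)
labels = kmeans(scores)
```

    With well-separated families, the first principal component carries essentially all of the between-family variance, so clustering in the reduced space recovers the two families.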

  6. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    NASA Astrophysics Data System (ADS)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and of its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on a β-distribution for Z and a Dirac distribution for C; a model employing a β-distribution for both Z and C; and a third model using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, accounts neither for the interaction between turbulence and chemical kinetics nor for the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed-PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the predictive capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
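
    As a concrete illustration of the presumed-PDF step, the sketch below averages a flamelet quantity phi(Z) over a β-distribution whose shape parameters are set by the mean and variance of Z. The temperature profile and the moment values are invented for the demo; only the β-PDF construction follows the standard presumed-PDF recipe.

```python
import numpy as np
from math import gamma

def beta_pdf(z, mean, var):
    """Beta distribution with prescribed mean and variance (presumed PDF for Z)."""
    g = mean * (1.0 - mean) / var - 1.0        # requires var < mean * (1 - mean)
    a, b = mean * g, (1.0 - mean) * g
    norm = gamma(a + b) / (gamma(a) * gamma(b))
    return norm * z ** (a - 1.0) * (1.0 - z) ** (b - 1.0)

def presumed_pdf_average(phi, mean, var, n=20000):
    """Mean of phi(Z) under the presumed beta PDF, by discrete quadrature."""
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    w = beta_pdf(z, mean, var)
    return float(np.sum(phi(z) * w) / np.sum(w))

# hypothetical flamelet temperature profile peaking near Z = 0.3
phi = lambda z: 300.0 + 1500.0 * np.exp(-((z - 0.3) / 0.1) ** 2)
t_mean = presumed_pdf_average(phi, mean=0.3, var=0.01)
```

    Increasing the presumed variance of Z spreads probability away from the peak of phi and lowers the average, which is exactly the turbulence-chemistry interaction effect the PDF is meant to capture.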

  7. A quadrature based method of moments for nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin L.; Vedula, Prakash

    2011-09-01

    Fokker-Planck equations which are nonlinear with respect to their probability densities and occur in many nonequilibrium systems relevant to mean field interaction models, plasmas, fermions and bosons can be challenging to solve numerically. To address some underlying challenges, we propose the application of the direct quadrature based method of moments (DQMOM) for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations (NLFPEs). In DQMOM, probability density (or other distribution) functions are represented using a finite collection of Dirac delta functions, characterized by quadrature weights and locations (or abscissas) that are determined based on constraints due to evolution of generalized moments. Three particular examples of nonlinear Fokker-Planck equations considered in this paper include descriptions of: (i) the Shimizu-Yamada model, (ii) the Desai-Zwanzig model (both of which have been developed as models of muscular contraction) and (iii) fermions and bosons. Results based on DQMOM, for the transient and stationary solutions of the nonlinear Fokker-Planck equations, have been found to be in good agreement with other available analytical and numerical approaches. It is also shown that approximate reconstruction of the underlying probability density function from moments obtained from DQMOM can be satisfactorily achieved using a maximum entropy method.
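
    The core representation can be illustrated with a fixed Gauss-Hermite quadrature: a standard normal density collapses to four Dirac deltas whose weights and abscissas reproduce its first eight raw moments. This sketch shows only the delta-function representation, not the DQMOM evolution of the weights and abscissas.

```python
import numpy as np

# Represent a standard normal PDF by four Dirac deltas: the nodes (abscissas)
# and weights of probabilists' Gauss-Hermite quadrature.
nodes, weights = np.polynomial.hermite_e.hermegauss(4)
weights = weights / weights.sum()          # normalize into probability weights

def moment(k):
    """k-th raw moment of the delta-function representation."""
    return float(np.sum(weights * nodes ** k))

# N = 4 nodes match the first 2N = 8 raw moments of N(0, 1): 1, 0, 1, 0, 3, 0, 15, 0
```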

  8. Stochastic characteristics and Second Law violations of atomic fluids in Couette flow

    NASA Astrophysics Data System (ADS)

    Raghavan, Bharath V.; Karimi, Pouyan; Ostoja-Starzewski, Martin

    2018-04-01

    Using Non-equilibrium Molecular Dynamics (NEMD) simulations, we study the statistical properties of an atomic fluid undergoing planar Couette flow, in which particles interact via a Lennard-Jones potential. We draw a connection between local density contrast and temporal fluctuations in the shear stress, which arise naturally through the equivalence between the dissipation function and entropy production according to the fluctuation theorem. We focus on the shear stress and the spatio-temporal density fluctuations and study the autocorrelations and spectral densities of the shear stress. The bispectral density of the shear stress is used to measure the degree of departure from a Gaussian model and the degree of nonlinearity induced in the system by the applied strain rate. More evidence is provided by the probability density function of the shear stress. We use information theory to account for the departure from Gaussian statistics and to develop a more general probability distribution function that captures this broad range of effects. By accounting for negative shear stress increments, we show how this distribution preserves the violations of the Second Law of Thermodynamics observed in planar Couette flow of atomic fluids, and also how it captures the non-Gaussian nature of the system by allowing for non-zero higher moments. We also demonstrate how the temperature affects the bandwidth of the shear stress and how the density affects its power spectral density, thus determining the conditions under which the shear stress acts as a narrow-band or wide-band random process. We show that changes in the statistical characteristics of the parameters of interest occur at a critical strain rate at which an ordering transition occurs in the fluid, causing shear thinning and affecting its stability. A critical strain rate of this kind is also predicted by the Loose-Hess stability criterion.

  9. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the normal-process model. Therefore, an attempt is made to describe the statistical samples {δfoF2} with a Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excess-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of the coincidence of a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on a Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
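
    For context on the quoted coincidence probabilities, the statistic behind Kolmogorov's criterion is the largest gap between the empirical CDF of a sample and the model CDF. A generic sketch on synthetic data (the uniform model here is illustrative, not the ionospheric model of the paper):

```python
import numpy as np

def ks_statistic(sample, model_cdf):
    """Kolmogorov distance D: sup over x of |empirical CDF - model CDF|."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    f = model_cdf(x)
    return max(float(np.max(np.arange(1, n + 1) / n - f)),
               float(np.max(f - np.arange(0, n) / n)))

rng = np.random.default_rng(0)
u = rng.uniform(size=2000)
d_match = ks_statistic(u, lambda x: x)          # sample drawn from the model
d_mismatch = ks_statistic(u ** 2, lambda x: x)  # sample from a different law
```

    A small D means the a posteriori distribution is compatible with the theoretical one; a large D rejects the model.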

  10. The H-Function and Probability Density Functions of Certain Algebraic Combinations of Independent Random Variables with H-Function Probability Distribution

    DTIC Science & Technology

    1981-05-01


  11. The influence of surface properties on the plasma dynamics in radio-frequency driven oxygen plasmas: Measurements and simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greb, Arthur; Niemi, Kari; O'Connell, Deborah

    2013-12-09

    Plasma parameters and dynamics in capacitively coupled oxygen plasmas are investigated for different surface conditions. Metastable species concentration, electronegativity, spatial distribution of particle densities as well as the ionization dynamics are significantly influenced by the surface loss probability of metastable singlet delta oxygen (SDO). Simulated surface conditions are compared to experiments in the plasma-surface interface region using phase resolved optical emission spectroscopy. It is demonstrated how in-situ measurements of excitation features can be used to determine SDO surface loss probabilities for different surface materials.

  12. Studies on the latitudinal distribution of ground-based geomagnetic pulsations and fluctuations in the interplanetary medium using discrete mathematical analysis methods

    NASA Astrophysics Data System (ADS)

    Zelinsky, N. R.; Kleimenova, N. G.; Malysheva, L. M.

    2014-07-01

    Ground-based geomagnetic Pc5 (2-7 mHz) pulsations, caused by the passage of dense transients (density disturbances) in the solar wind, were analyzed. It was shown that intense bursts in the solar wind density and its fluctuations, up to Np ≈ 30-50 cm⁻³, can appear even during the most magnetically calm year in the past decades (2009). The analysis, performed using one of the latest methods of discrete mathematical analysis (DMA), is presented. The energy functional of a time-series fragment (called "anomaly rectification" in DMA terms) was calculated for two such events. It was established that fluctuations in the dynamic pressure (density) of the solar wind (SW) cause the global excitation of Pc5 geomagnetic pulsations in the daytime sector of the Earth's magnetosphere, i.e., from polar to equatorial latitudes. Such pulsations started and ended suddenly and simultaneously at all latitudes. Fluctuations in the interplanetary magnetic field (IMF) turned out to be less geoeffective in exciting geomagnetic pulsations than fluctuations in the SW density. The pulsation generation mechanisms in various structural regions of the magnetosphere were probably different. It was therefore concluded that the most probable source of ground-based pulsations is fluctuations of the corresponding periods in the SW density.

  13. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.
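
    A common way to make detection depend on exposure to traps, as in the spatial model above, is a half-normal encounter function: capture probability decays with the distance between a trap and an individual's activity centre. This is one standard choice; the paper's exact encounter formulation may differ, and p0 and sigma below are illustrative.

```python
import numpy as np

def detection_prob(center, trap, p0=0.3, sigma=300.0):
    """Half-normal detection: baseline probability p0 at zero distance,
    decaying with distance between activity center and trap (scale sigma, m)."""
    d2 = float(np.sum((np.asarray(center, float) - np.asarray(trap, float)) ** 2))
    return p0 * np.exp(-d2 / (2.0 * sigma ** 2))

p_near = detection_prob((0.0, 0.0), (0.0, 0.0))     # at the activity center
p_far = detection_prob((0.0, 0.0), (300.0, 0.0))    # one sigma away
```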

  14. Urban stormwater capture curve using three-parameter mixed exponential probability density function and NRCS runoff curve number method.

    PubMed

    Kim, Sangdan; Han, Suhee

    2010-01-01

    Most related literature regarding the design of urban non-point-source management systems assumes that precipitation event depths follow the one-parameter exponential probability density function, to reduce the mathematical complexity of the derivation process. However, the way the rainfall is expressed is the most important factor in analyzing stormwater; thus, a better mathematical expression, which represents the probability distribution of rainfall depths, is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is modified using the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method to consider the nonlinearity of the rainfall-runoff relation and, at the same time, obtain a more verifiable and representative curve for design when applying it to urban drainage areas with complicated land-use characteristics, such as those in Korea. The result of developing the stormwater-capture curve from the rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
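
    The runoff step referenced above follows the standard NRCS curve-number relation; a minimal SI-units sketch with the conventional initial-abstraction ratio of 0.2 (the CN values below are illustrative):

```python
def nrcs_runoff(p_mm, cn):
    """NRCS curve-number runoff depth (mm): Q = (P - Ia)^2 / (P - Ia + S),
    with retention S = 25400/CN - 254 and initial abstraction Ia = 0.2 S."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0                  # rainfall fully absorbed, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# small storms yield zero runoff; higher CN (more impervious) yields more runoff
```

    The threshold at P = Ia is the nonlinearity of the rainfall-runoff relation mentioned in the abstract: below it, no runoff is generated at all.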

  15. Mitigating clogging and arrest in confined self-propelled systems

    NASA Astrophysics Data System (ADS)

    Savoie, William; Aguilar, Jeffrey; Monaenkova, Daria; Linevich, Vadim; Goldman, Daniel

    Ensembles of self-propelling elements, like colloidal surfers, bacterial biofilms, and robot swarms, can spontaneously form density heterogeneities. To understand how to prevent potentially catastrophic clogs in task-oriented active matter systems (like soil-excavating robots), we present a robophysical study of excavation of granular media in a confined environment. We probe the efficacy of two social strategies observed in our studies of fire ants (S. invicta). The first behavior (denoted "unequal workload") prescribes to each excavator a different probability to enter the digging area. The second behavior (denoted "reversal") is characterized by a probability to forfeit excavation when progress is sufficiently obstructed. For equal workload distribution and no reversal behavior, clogs at the digging site prevent excavation for sufficient numbers of robots. Measurements of aggregation relaxation times reveal how the strategies mitigate clogs. The unequal workload behavior reduces the tunnel density, decreasing the probability of clog formation. Reversal behavior, while allowing clogs to form, reduces the aggregation relaxation time. We posit that application of social behaviors can be useful for swarm robot systems where global control and organization may not be possible.

  16. MaxEnt alternatives to pearson family distributions

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie J.

    2012-05-01

    In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.

  17. Rapid measurement of the three-dimensional distribution of leaf orientation and the leaf angle probability density function using terrestrial LiDAR scanning

    USDA-ARS?s Scientific Manuscript database

    Leaf orientation plays a fundamental role in many transport processes in plant canopies. At the plant or stand level, leaf orientation is often highly anisotropic and heterogeneous, yet most analyses neglect such complexity. In many cases, this is due to the difficulty in measuring the spatial varia...

  18. Stochastic GARCH dynamics describing correlations between stocks

    NASA Astrophysics Data System (ADS)

    Prat-Ortega, G.; Savel'ev, S. E.

    2014-09-01

    The ARCH and GARCH processes have been successfully used for modelling price dynamics such as stock returns or foreign exchange rates. Analysing the long range correlations between stocks, we propose a model, based on the GARCH process, which is able to describe the main characteristics of the stock price correlations, including the mean, variance, probability density distribution and the noise spectrum.
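
    A minimal sketch of the underlying GARCH(1,1) recursion (parameter values are illustrative, not fitted to any stock series) shows the volatility clustering and the heavier-than-Gaussian unconditional density such models produce:

```python
import numpy as np

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85, seed=0):
    """Returns r_t = sigma_t z_t with z_t ~ N(0, 1) and GARCH(1,1) variance
    sigma_t^2 = omega + alpha r_{t-1}^2 + beta sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    var = omega / (1.0 - alpha - beta)    # start at the stationary variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[t] ** 2 + beta * var
    return r

returns = simulate_garch11(50000)
# positive excess kurtosis: heavier tails than a Gaussian with the same variance
excess_kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3.0
```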

  19. Investigation of MHD flow structure and fluctuations by potassium lineshape fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauman, L.E.

    1993-12-31

    Multiple potassium D-line emission absorption spectra from a high temperature, coal-fired flow have been fit to a radiative transfer, boundary layer flow model. The results of fitting spectra from the aerodynamic duct of the Department of Energy Coal-Fired Flow Facility provide information about the thickness and shape of the thermal boundary layer and the bulk potassium seed atom density in a simulated magnetohydrodynamic channel flow. Probability distribution functions for the entire set of more than six thousand spectra clearly indicate the typical values and magnitude of fluctuations for the flow: core temperature of 2538 ± 20 K, near-wall temperature of 1945 ± 135 K, boundary layer width of about 1 cm, and potassium seed atom density of (5.1 ± 0.8) × 10²²/m³. Probability distribution functions for selected times during the eight hours of measurements indicate occasional periods of unstable combustion. In addition, broadband particle parameters during the unstable start of the test may be related to differing particle and gas temperatures. The results clearly demonstrate the ability of lineshape fitting to provide valuable data for diagnosing the high speed turbulent flow.

  20. Patch-occupancy models indicate human activity as major determinant of forest elephant Loxodonta cyclotis seasonal distribution in an industrial corridor in Gabon

    USGS Publications Warehouse

    Buij, R.; McShea, W.J.; Campbell, P.; Lee, M.E.; Dallmeier, F.; Guimondou, S.; Mackaga, L.; Guisseougou, N.; Mboumba, S.; Hines, J.E.; Nichols, J.D.; Alonso, A.

    2007-01-01

    The importance of human activity and ecological features in influencing African forest elephant ranging behaviour was investigated in the Rabi-Ndogo corridor of the Gamba Complex of Protected Areas in southwest Gabon. Locations in a wide geographical area with a range of environmental variables were selected for patch-occupancy surveys using elephant dung to assess seasonal presence and absence of elephants. Patch-occupancy procedures allowed for covariate modelling evaluating hypotheses for both occupancy in relation to human activity and ecological features, and detection probability in relation to vegetation density. The best fitting models for old and fresh dung data sets indicate that (1) detection probability for elephant dung is negatively related to the relative density of the vegetation, and (2) human activity, such as presence and infrastructure, are more closely associated with elephant distribution patterns than are ecological features, such as the presence of wetlands and preferred fresh fruit. Our findings emphasize the sensitivity of elephants to human disturbance, in this case infrastructure development associated with gas and oil production. Patch-occupancy methodology offers a viable alternative to current transect protocols for monitoring programs with multiple covariates.

  1. Statistical properties of Charney-Hasegawa-Mima zonal flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Johan, E-mail: anderson.johan@gmail.com; Botha, G. J. J.

    2015-05-15

    A theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent plasma transport events in unforced zonal flows is provided within the Charney-Hasegawa-Mima (CHM) model. The governing equation is solved numerically with various prescribed density gradients that are designed to produce different configurations of parallel and anti-parallel streams. Long-lasting vortices form whose flow is governed by the zonal streams. It is found that the numerically generated PDFs can be matched with analytical predictions of PDFs based on the instanton method by removing the autocorrelations from the time series. In many instances, the statistics generated by the CHM dynamics relaxes to Gaussian distributions for both the electrostatic and vorticity perturbations, whereas in areas with strong nonlinear interactions it is found that the PDFs are exponentially distributed.

  2. Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, F. E.; Malamud, B. D.

    2012-04-01

    During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay with exponent of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m². This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to have an average density of 1 landslide km⁻², i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen, in a 'T'-shape configuration, with one road of 1 x 4000 cells (5 m x 20 km) joined by another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km² region, the number of road blocks per iteration, NBL, ranges from 0 to 7.
The average blockage area over the 500 iterations (ĀBL) is about 3000 m², which closely matches the value of ĀL for the triggered landslide inventories. We further find that over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(NBL), as a function of NBL follows reasonably well a three-parameter inverse gamma probability density distribution with an exponential rollover (i.e., the most frequent value) at NBL = 1.3. In this paper we have begun to calculate the probability of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid in both long-term and disaster management for road networks by allowing probabilistic assessment of potential road network damage during different magnitude landslide triggering event scenarios.
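
    The Monte-Carlo procedure can be sketched as below. This is an illustrative simplification: discs instead of mapped landslide shapes, a single straight road, and generic two-parameter inverse-gamma values chosen only to put the area rollover near 400 m².

```python
import numpy as np

def count_road_blockages(n_landslides=400, region_km=20.0, seed=0):
    """Drop landslides with inverse-gamma-distributed areas onto a square
    region and count how many intersect a straight road across the middle."""
    rng = np.random.default_rng(seed)
    shape, scale = 1.4, 1280.0                            # illustrative parameters
    areas = scale / rng.gamma(shape, 1.0, n_landslides)   # inverse-gamma draws, m^2
    radii = np.sqrt(areas / np.pi)                        # treat landslides as discs
    y = rng.uniform(0.0, region_km * 1e3, n_landslides)   # landslide centers (m)
    road_y = region_km * 1e3 / 2.0                        # horizontal road mid-region
    return int(np.sum(np.abs(y - road_y) <= radii))

blocks = [count_road_blockages(seed=s) for s in range(100)]
# typically a handful of blockages per triggering event, often zero
```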

  3. Benford's law and the FSD distribution of economic behavioral micro data

    NASA Astrophysics Data System (ADS)

    Villas-Boas, Sofia B.; Fu, Qiuzi; Judge, George

    2017-11-01

    In this paper, we focus on the first significant digit (FSD) distribution of European micro income data and use information-theoretic, entropy-based methods to investigate the degree to which Benford's FSD law is consistent with the nature of these economic behavioral systems. We demonstrate that Benford's law is not an empirical phenomenon that occurs only in important distributions in physical statistics, but that it also arises in self-organizing dynamic economic behavioral systems. The empirical likelihood member of the minimum divergence-entropy family is used to recover country-based income FSD probability density functions and to demonstrate the implications of using a Benford prior reference distribution in economic behavioral system information recovery.
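
    For reference, Benford's first-significant-digit law itself is a one-line probability mass function:

```python
import math

def benford_pmf(d):
    """Benford's law: P(first significant digit = d) = log10(1 + 1/d)."""
    return math.log10(1.0 + 1.0 / d)

probs = [benford_pmf(d) for d in range(1, 10)]
# digit 1 is most likely (about 30.1%), digit 9 least likely (about 4.6%)
```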

  4. Ant-inspired density estimation via random walks

    PubMed Central

    Musco, Cameron; Su, Hsin-Hao

    2017-01-01

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146
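
    The encounter-rate idea can be sketched with independent random walkers on a torus grid, each estimating density as its own encounter count divided by steps taken (the grid size, walker count and step count below are arbitrary demo values, not the parameters analyzed in the paper):

```python
import numpy as np

def estimate_density(n_agents=200, grid=50, steps=400, seed=0):
    """Agents random-walk on a torus; each estimates local density as
    (encounters observed) / (steps taken), as in encounter-rate sensing."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, grid, (n_agents, 2))
    encounters = np.zeros(n_agents)
    moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
    for _ in range(steps):
        pos = (pos + moves[rng.integers(0, 4, n_agents)]) % grid
        cells = pos[:, 0] * grid + pos[:, 1]          # flatten cell coordinates
        _, inv, counts = np.unique(cells, return_inverse=True, return_counts=True)
        encounters += counts[inv] - 1                 # other agents in my cell
    return encounters / steps

estimates = estimate_density()
true_density = 200 / 50 ** 2                          # 0.08 agents per cell
```

    Even with the collision dependencies the paper analyzes, the mean of the per-agent estimates lands close to the true density.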

  5. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry the latent assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each state, this model can be applied not only to regression but to classification as well. We applied this model to denoising ECG data. The proposed method has the potential to apply to other time series, such as stock market return predictions.

  6. Refinement of the probability density function model for preferential concentration of aerosol particles in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Zaichik, Leonid I.; Alipchenkov, Vladimir M.

    2007-11-01

    The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained due to refining the description of the turbulent velocity field of the carrier fluid by including a difference between the time scales of the strain-rate and rotation-rate correlations. The refined model results in a better agreement with direct numerical simulations for aerosol particles.

  7. The 6dFGS Peculiar Velocity Field

    NASA Astrophysics Data System (ADS)

    Springob, Chris M.; Magoulas, C.; Colless, M.; Mould, J.; Erdogdu, P.; Jones, D. H.; Lucey, J.; Campbell, L.; Merson, A.; Jarrett, T.

    2012-01-01

    The 6dF Galaxy Survey (6dFGS) is an all southern sky galaxy survey, including 125,000 redshifts and a Fundamental Plane (FP) subsample of 10,000 peculiar velocities, making it the largest peculiar velocity sample to date. We have fit the FP using a maximum likelihood fit to a tri-variate Gaussian. We subsequently compute a Bayesian probability distribution for every possible peculiar velocity for each of the 10,000 galaxies, derived from the tri-variate Gaussian probability density distribution, accounting for our selection effects and measurement errors. We construct a predicted peculiar velocity field from the 2MASS redshift survey, and compare our observed 6dFGS velocity field to the predicted field. We discuss the resulting agreement between the observed and predicted fields, and the implications for measurements of the bias parameter and bulk flow.

  8. Wave theory of turbulence in compressible media (acoustic theory of turbulence)

    NASA Technical Reports Server (NTRS)

    Kentzer, C. P.

    1975-01-01

    The generation and the transmission of sound in turbulent flows are treated as one of the several aspects of wave propagation in turbulence. Fluid fluctuations are decomposed into orthogonal Fourier components, with five interacting modes of wave propagation: two vorticity modes, one entropy mode, and two acoustic modes. Wave interactions, governed by the inhomogeneous and nonlinear terms of the perturbed Navier-Stokes equations, are modeled by random functions which give the rates of change of wave amplitudes equal to the averaged interaction terms. The statistical framework adopted is a quantum-like formulation in terms of complex distribution functions. The spatial probability distributions are given by the squares of the absolute values of the complex characteristic functions. This formulation results in nonlinear diffusion-type transport equations for the probability densities of the five modes of wave propagation.

  9. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise-constant or piecewise-smooth intensities for the segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. To address these problems, we propose a supervised variational level set segmentation model that harnesses a statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions using a mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
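
    The region-density idea can be sketched numerically. The mixtures below are hand-set for illustration (they are not fitted to any data and are not the authors' mixture-of-mixtures code): each region's intensity distribution is modelled by a small Gaussian mixture, and a pixel is assigned to whichever region gives it the higher likelihood.

```python
import numpy as np

# Evaluate a 1-D Gaussian mixture density at intensity values x.
def gmm_pdf(x, weights, means, stds):
    x = np.asarray(x)[..., None]
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return (weights * comp).sum(axis=-1)

# Illustrative, hand-set mixtures for "foreground" and "background" regions.
fg = dict(weights=np.array([0.6, 0.4]), means=np.array([120.0, 160.0]),
          stds=np.array([10.0, 15.0]))
bg = dict(weights=np.array([0.7, 0.3]), means=np.array([40.0, 80.0]),
          stds=np.array([12.0, 20.0]))

# Classify pixels by comparing region likelihoods.
pixels = np.array([45.0, 125.0, 150.0])
is_fg = gmm_pdf(pixels, **fg) > gmm_pdf(pixels, **bg)
```

    In the full model this likelihood comparison enters the level set energy rather than being applied pixel-by-pixel, but the density term behaves as above.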

  10. Disturbance frequency and vertical distribution of seeds affect long-term population dynamics: a mechanistic seed bank model.

    PubMed

    Eager, Eric Alan; Haridas, Chirakkal V; Pilson, Diana; Rebarber, Richard; Tenhumberg, Brigitte

    2013-08-01

    Seed banks are critically important for disturbance specialist plants because seeds of these species germinate only in disturbed soil. Disturbance and seed depth affect the survival and germination probability of seeds in the seed bank, which in turn affect population dynamics. We develop a density-dependent stochastic integral projection model to evaluate the effect of stochastic soil disturbances on plant population dynamics with an emphasis on mimicking how disturbances vertically redistribute seeds within the seed bank. We perform a simulation analysis of the effect of the frequency and mean depth of disturbances on the population's quasi-extinction probability, as well as the long-term mean and variance of the total density of seeds in the seed bank. We show that increasing the frequency of disturbances increases the long-term viability of the population, but the relationship between the mean depth of disturbance and the long-term viability of the population is not necessarily monotonic for all parameter combinations. Specifically, an increase in the probability of disturbance increases the long-term viability of the total seed bank population. However, if the probability of disturbance is too low, a shallower mean depth of disturbance can increase long-term viability, a relationship that switches as the probability of disturbance increases. Even then, a shallow disturbance depth is beneficial only in scenarios with low survival in the seed bank.
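
    The qualitative effect of disturbance frequency can be seen even in a drastically simplified two-layer caricature of the model. All rates below are invented for illustration (they are not the paper's parameter values): seeds sit in a shallow or a deep layer, recruitment happens only in disturbance years, and disturbance also mixes the layers.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo quasi-extinction probability for a toy two-layer seed bank.
def simulate(p_disturb, years=200, n_runs=500, q_threshold=10.0):
    extinct = 0
    for _ in range(n_runs):
        shallow, deep = 50.0, 50.0
        for _ in range(years):
            if rng.random() < p_disturb:
                # disturbance mixes the layers and exposes shallow seeds
                shallow, deep = 0.6 * (shallow + deep), 0.4 * (shallow + deep)
                shallow *= 2.0        # germination in disturbed soil -> new seed
            shallow *= 0.8            # annual survival, shallow layer
            deep *= 0.95              # annual survival, deep layer
        if shallow + deep < q_threshold:
            extinct += 1
    return extinct / n_runs

# frequent disturbance -> low quasi-extinction probability, and vice versa
p_rare, p_frequent = simulate(0.05), simulate(0.5)
```

    The real model tracks a continuous depth distribution of seeds via an integral projection kernel; this sketch only reproduces the headline effect of the disturbance probability.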

  11. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
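
    The PME construction described above can be illustrated numerically: subject to fixed mean and variance, the maximum-entropy density has the exponential-family form exp(λ₁x + λ₂x²), and the multipliers can be solved from the moment constraints. The target moments below are made up for the sketch; with only the first two moments constrained, the result should reduce to a Gaussian.

```python
import numpy as np
from scipy.optimize import fsolve

# Grid on which the maximum-entropy density is represented.
x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]
target_mean, target_var = 0.5, 1.5   # illustrative constraint values

# Residuals of the moment constraints as a function of the multipliers.
def residuals(lam):
    l1, l2 = lam
    p = np.exp(l1 * x + l2 * x**2)
    p /= p.sum() * dx                 # normalise on the grid
    m = (p * x).sum() * dx
    v = (p * (x - m) ** 2).sum() * dx
    return [m - target_mean, v - target_var]

l1, l2 = fsolve(residuals, [0.0, -0.5])
p = np.exp(l1 * x + l2 * x**2)
p /= p.sum() * dx                     # the maximum-entropy PDF
```

    With higher-order moment constraints, as used for the state function in the paper, the same ansatz simply gains λ₃x³, λ₄x⁴ terms.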

  12. SUPERNOVA DRIVING. II. COMPRESSIVE RATIO IN MOLECULAR-CLOUD TURBULENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Liubin; Padoan, Paolo; Haugbølle, Troels

    2016-07-01

    The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.
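
    For reference, the lognormal density PDF the clouds are compared against can be written down in a few lines. In the standard model for isothermal turbulence, s = ln(ρ/ρ₀) is Gaussian with variance σ_s² = ln(1 + b²M²) and mean −σ_s²/2 (so that ⟨ρ/ρ₀⟩ = 1), where the driving parameter b encodes the compressive ratio (b ≈ 1/3 for purely solenoidal forcing, b ≈ 1 for purely compressive). The Mach number and b below are illustrative values, not the simulation's.

```python
import numpy as np

# Lognormal density PDF of s = ln(rho/rho0) for isothermal turbulence.
def lognormal_pdf(s, mach, b=1/3):
    var = np.log(1.0 + (b * mach) ** 2)   # sigma_s^2 = ln(1 + b^2 M^2)
    mean = -0.5 * var                     # enforces <rho/rho0> = 1
    return np.exp(-(s - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

s = np.linspace(-10, 10, 4001)
p = lognormal_pdf(s, mach=10.0)
# p is normalised and the mass-weighted mean density equals rho0
```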

  13. Bayesian ionospheric multi-instrument 3D tomography

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Vierinen, Juha; Roininen, Lassi

    2017-04-01

    The tomographic reconstruction of ionospheric electron densities is an inverse problem that cannot be solved without relatively strong additional regularising information; the vertical electron density profile, in particular, is determined predominantly by the regularisation. Despite its crucial role, the regularisation is often hidden in the algorithm as a numerical procedure without physical interpretation. The Bayesian methodology provides an interpretative approach to the problem, as the regularisation can be given as a physically meaningful and quantifiable prior probability distribution. The prior distribution can be based on ionospheric physics and on other available ionospheric measurements and their statistics. Updating the prior with measurements yields the posterior distribution, which carries all the available information combined. From the posterior distribution, the most probable state of the ionosphere can then be solved together with the corresponding probability intervals. Altogether, the Bayesian methodology provides an understanding of how strong the given regularisation is, what information is gained from the measurements, and how reliable the final result is. In addition, the combination of different measurements and the temporal development can be taken into account in a very intuitive way. However, a direct implementation of the Bayesian approach requires the inversion of large covariance matrices, which is computationally infeasible. In the presented method, Gaussian Markov random fields are used to form sparse matrix approximations of the covariances. This makes the problem computationally feasible while retaining the probabilistic and physical interpretation. Here, the Bayesian method with Gaussian Markov random fields is applied to ionospheric 3D tomography over Northern Europe. Multi-instrument measurements are utilised from the TomoScand receiver network for Low Earth orbit beacon satellite signals, from GNSS receiver networks, and from EISCAT ionosondes and incoherent scatter radars.
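
    The computational point is that a Gaussian Markov random field is specified through a sparse precision matrix Q rather than a dense covariance, so the Gaussian posterior update stays sparse. The toy 1-D problem below (not the authors' implementation; geometry and noise level are invented) shows the pattern: with linear "ray" measurements y = Ax + noise, the posterior precision is Q + AᵀA/σ² and the posterior mean solves one sparse linear system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Discrete-Laplacian (smoothness) prior precision: sparse, tridiagonal.
D = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
Q = 10.0 * D + 1e-3 * sp.identity(n)    # small ridge keeps Q well conditioned

# Sparse forward operator: each "ray" sums ten neighbouring cells.
rows, cols = [], []
for i in range(0, n - 10, 5):
    rows += [i // 5] * 10
    cols += list(range(i, i + 10))
A = sp.csc_matrix((np.ones(len(rows)), (rows, cols)),
                  shape=(len(set(rows)), n))

x_true = np.sin(np.linspace(0, 3 * np.pi, n))
sigma = 0.05
y = A @ x_true + sigma * np.random.default_rng(0).normal(size=A.shape[0])

# Posterior precision stays sparse; the mean is one sparse solve.
Q_post = (Q + A.T @ A / sigma**2).tocsc()
x_map = spla.spsolve(Q_post, A.T @ y / sigma**2)
```

    A dense-covariance formulation of the same update would require inverting an n × n covariance, which is exactly what becomes infeasible in 3D.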

  14. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning

    NASA Astrophysics Data System (ADS)

    Jiang, Runqing; Barnett, Rob B.; Chow, James C. L.; Chen, Jeff Z. Y.

    2007-03-01

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15° increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. 
The margin of the planning target volume (PTV) significantly impacts the confidence of tumour control probability (TCP) and the level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has a lower complication probability with the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.

  15. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning.

    PubMed

    Jiang, Runqing; Barnett, Rob B; Chow, James C L; Chen, Jeff Z Y

    2007-03-07

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15 degree increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. 
The margin of the planning target volume (PTV) significantly impacts the confidence of tumour control probability (TCP) and the level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has a lower complication probability with the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.
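
    The convolution step described in the abstract, D(x) = ∫ D0(x − u) P(u) du with a motion/setup PDF P, can be sketched in one dimension. The profile shape, grid, and 5 mm Gaussian standard deviation below are illustrative, not patient data.

```python
import numpy as np

x = np.arange(-60.0, 60.0, 0.5)             # position grid, mm
D0 = ((x > -25) & (x < 25)).astype(float)   # idealised 50 mm "static" profile

# Convolve the static dose with a normalised Gaussian motion PDF.
def blur(dose, sigma_mm, dx=0.5):
    u = np.arange(-5 * sigma_mm, 5 * sigma_mm + dx, dx)
    pdf = np.exp(-u**2 / (2 * sigma_mm**2))
    pdf /= pdf.sum()                        # discrete normalisation
    return np.convolve(dose, pdf, mode="same")

D = blur(D0, sigma_mm=5.0)
# the penumbra broadens while the integral dose is conserved
```

    The PMDD behaviour follows directly: the blurred-minus-static difference is largest where the static dose gradient is steepest, which is why the inferior penumbra dominates.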

  16. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.

  17. Modeling of Abrasion and Crushing of Unbound Granular Materials During Compaction

    NASA Astrophysics Data System (ADS)

    Ocampo, Manuel S.; Caicedo, Bernardo

    2009-06-01

    Unbound compacted granular materials are commonly used in engineering structures as layers in road pavements, railroad beds, highway embankments, and foundations. These structures are generally subjected to dynamic loading by construction operations, traffic and wheel loads. These repeated or cyclic loads cause abrasion and crushing of the granular materials. Abrasion changes a particle's shape, and crushing divides the particle into a mixture of many small particles of varying sizes. Particle breakage is important because the mechanical and hydraulic properties of these materials depend upon their grain size distribution. Therefore, it is important to evaluate the evolution of the grain size distribution of these materials. In this paper an analytical model for unbound granular materials is proposed in order to evaluate particle crushing of gravels and soils subjected to cyclic loads. The model is based on a Markov chain which describes the development of grading changes in the material as a function of stress levels. In the model proposed, each particle size is a state in the system, and the evolution of the material is the movement of particles from one state to another in n steps. Each step is a load cycle, and movement between states is possible with a transition probability. The crushing of particles depends on the mechanical properties of each grain and the packing density of the granular material. The transition probability was calculated using both the survival probability defined by Weibull and the compressible packing model developed by De Larrard. Material mechanical properties are considered using the Weibull probability theory. The size and shape of the grains, as well as the method of processing the packing density are considered using De Larrard's model. Results of the proposed analytical model show a good agreement with the experimental tests carried out using the gyratory compaction test.
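
    The Markov-chain bookkeeping in the model can be sketched with a handful of size classes. The transition probabilities below are invented for illustration (the paper derives them from Weibull survival statistics and De Larrard's packing model): each size class is a state, one load cycle is one step, and breakage moves mass from coarser to finer classes.

```python
import numpy as np

sizes = ["coarse", "medium", "fine", "filler"]   # state labels

# Transition matrix T: rows are from-states, columns are to-states.
T = np.array([
    [0.95, 0.04, 0.01, 0.00],
    [0.00, 0.96, 0.03, 0.01],
    [0.00, 0.00, 0.98, 0.02],
    [0.00, 0.00, 0.00, 1.00],   # the finest class cannot break further
])

grading = np.array([0.5, 0.3, 0.15, 0.05])  # initial mass fractions

for _ in range(100):            # 100 load cycles
    grading = grading @ T       # one Markov step per cycle
```

    Because every row of T sums to one, total mass is conserved while the grading curve drifts toward the finer classes, mimicking the measured evolution under gyratory compaction.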

  18. Estimating abundance of mountain lions from unstructured spatial sampling

    USGS Publications Warehouse

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. 
Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
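
    The "distance" effect underlying these models is a detection function: the probability of recording an individual declines with the distance between its activity centre and the sampling location, commonly modelled as a half-normal. The parameter values below are illustrative, not the fitted Montana estimates.

```python
import numpy as np

# Half-normal detection function used in spatial capture-recapture:
# g0 is the detection probability at distance zero, sigma the spatial scale.
def detection_prob(d_km, g0=0.2, sigma_km=3.0):
    return g0 * np.exp(-d_km**2 / (2 * sigma_km**2))

d = np.array([0.0, 3.0, 10.0])
p = detection_prob(d)   # declines monotonically with distance
```

    Fitting g0 and sigma jointly with density is what lets the method separate "few animals" from "animals far from the traps".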

  19. A statistical-based material and process guidelines for design of carbon nanotube field-effect transistors in gigascale integrated circuits.

    PubMed

    Ghavami, Behnam; Raji, Mohsen; Pedram, Hossein

    2011-08-26

    Carbon nanotube field-effect transistors (CNFETs) show great promise as building blocks of future integrated circuits. However, synthesizing single-walled carbon nanotubes (CNTs) with accurate chirality and exact positioning control has been widely acknowledged as an exceedingly complex task. Indeed, density and chirality variations in CNT growth can compromise the reliability of CNFET-based circuits. In this paper, we present a novel statistical compact model to estimate the failure probability of CNFETs to provide some material and process guidelines for the design of CNFETs in gigascale integrated circuits. We use measured CNT spacing distributions within the framework of detailed failure analysis to demonstrate that both the CNT density and the ratio of metallic to semiconducting CNTs play dominant roles in defining the failure probability of CNFETs. Moreover, we argue that the large-scale integration of these devices within an integrated circuit will be feasible only if a specific range of CNT density with an acceptable ratio of semiconducting to metallic CNTs can be adjusted in a typical synthesis process.
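
    A back-of-envelope toy model (not the paper's compact model) shows how density and the metallic fraction both enter a failure probability. If the CNT count under a gate is Poisson with mean density × width, and each CNT is independently metallic with some probability, the device fails "open" when no semiconducting CNT lands under the gate and "short" when any metallic one does.

```python
from math import exp

# Toy CNFET failure probability: Poisson CNT count, thinned into
# independent semiconducting and metallic sub-counts.
def failure_prob(density_per_um, width_um, metallic_frac=1/3):
    lam = density_per_um * width_um
    p_open = exp(-lam * (1 - metallic_frac))   # zero semiconducting CNTs
    p_short = 1 - exp(-lam * metallic_frac)    # at least one metallic CNT
    return p_open + p_short - p_open * p_short

# raising the density suppresses opens but raises shorts, so only a
# window of densities (with few metallic CNTs) gives a reliable device
```

    This is exactly the trade-off the abstract points at: neither density alone nor the metallic ratio alone determines the failure probability.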

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

    Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.
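
    The thermodynamic bookkeeping described here is generic: given energy eigenvalues Eₙ, one sums the Boltzmann weights for Z and derives ⟨E⟩, F, and S from it. The spectrum below, Eₙ = n + λn² in scaled units, is an illustrative anharmonic ladder, not the actual Yurke–Buks spectrum.

```python
import numpy as np

# Partition function, average energy, free energy and entropy (in units
# with hbar*omega = k_B = 1) for a truncated anharmonic spectrum.
def thermodynamics(beta, lam=0.1, n_max=2000):
    n = np.arange(n_max)
    E = n + lam * n**2            # assumed illustrative spectrum
    w = np.exp(-beta * E)
    Z = w.sum()                   # partition function
    E_avg = (E * w).sum() / Z     # <E>
    F = -np.log(Z) / beta         # free energy
    S = beta * (E_avg - F)        # S/k_B = beta(<E> - F)
    return Z, E_avg, F, S

Z, E_avg, F, S = thermodynamics(beta=1.0)
```

    The truncation at n_max is safe whenever exp(−βE_{n_max}) has underflowed to zero, which is easily checked before trusting the sums.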

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conn, A. R.; Parker, Q. A.; Zucker, D. B.

    In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey. 
Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.

  2. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature of the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related to the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  3. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  4. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

    Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on an ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e. Riemannian coordinates, is robust and linear. Moreover it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
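
    The geometric core of such frameworks is compact enough to sketch: representing a discretised PDF by its square root places it on the unit sphere in L², where the Fisher–Rao geodesic distance is simply the arc length arccos⟨√p, √q⟩. The GA of the paper is this distance to the isotropic ODF; the 1-D discretisation below is an illustration, not the spherical-harmonic implementation.

```python
import numpy as np

# Fisher-Rao geodesic distance between two discretised PDFs via the
# square-root representation on the unit sphere in L2.
def geodesic_distance(p, q):
    a, b = np.sqrt(p), np.sqrt(q)
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

n = 64
iso = np.full(n, 1.0 / n)                 # isotropic distribution
peaked = np.exp(-0.5 * (np.arange(n) - 32) ** 2 / 4.0)
peaked /= peaked.sum()                    # a sharply peaked distribution

ga = geodesic_distance(peaked, iso)       # "geometric anisotropy" analogue
```

    Because square-root densities have non-negative entries, the inner product lies in [0, 1] and the distance is bounded by π/2, which is what makes normalised anisotropy measures possible.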

  5. Estimation of the radiation-induced DNA double-strand breaks number by considering cell cycle and absorbed dose per cell nucleus

    PubMed Central

    Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki

    2018-01-01

    DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure. PMID:29800455

  6. Estimation of the radiation-induced DNA double-strand breaks number by considering cell cycle and absorbed dose per cell nucleus.

    PubMed

    Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki

    2018-05-01

    DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure.

  7. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.
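    The flavor of these peak-density calculations can be checked numerically in one dimension: synthesize a Gaussian random field by random-phase spectral synthesis and count maxima above a height threshold. This is a toy check of the setup, not the paper's analytic formulas; the spectrum and correlation scale below are arbitrary choices.

```python
import math
import random

def gaussian_field_1d(n, n_modes=64, corr_scale=0.05, seed=0):
    # Smooth 1-D Gaussian random field on [0,1): a sum of random-phase
    # cosines with a Gaussian power spectrum (simple spectral synthesis).
    rng = random.Random(seed)
    ks = [2 * math.pi * m for m in range(1, n_modes + 1)]
    amps = [math.exp(-0.5 * (k * corr_scale) ** 2) for k in ks]
    phases = [rng.uniform(0, 2 * math.pi) for _ in ks]
    f = [sum(a * math.cos(k * i / n + p) for a, k, p in zip(amps, ks, phases))
         for i in range(n)]
    sigma = math.sqrt(sum(v * v for v in f) / n)
    return [v / sigma for v in f]  # normalize to unit variance

def peak_density(field, nu):
    # Number of local maxima higher than nu (in units of sigma), per unit
    # length (the domain length is 1).
    return sum(1 for i in range(1, len(field) - 1)
               if field[i] > field[i - 1] and field[i] > field[i + 1]
               and field[i] > nu)

f = gaussian_field_1d(4096)
# The density of peaks above height nu must fall monotonically with nu,
# the qualitative behavior the analytic number-density formulas quantify.
assert peak_density(f, 0.0) >= peak_density(f, 1.0) >= peak_density(f, 2.0)
```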

  8. Improving the efficiency of configurational-bias Monte Carlo: A density-guided method for generating bending angle trials for linear and branched molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu

    2014-08-21

    A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, that are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made especially for the highly branched molecules to reduce drastically the dimensionality and correspondingly the amount of the tabulated data that is needed to be stored. Despite these approximations, the dependencies between the various geometrical variables can be still well considered, as evident from a nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time-consuming process, is no longer the time dominating component of the simulation.
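    A sketch of the tabulated inverse-CDF idea for a single bending angle, with a harmonic bending potential and parameter values assumed for illustration (they are not taken from the paper). Because each trial is drawn from the target density itself, acceptance is effectively 100%.

```python
import math
import random

def build_bending_table(k_theta, theta0, beta, n=2000):
    # Tabulate the CDF of p(theta) ~ sin(theta) * exp(-beta * U(theta)),
    # with U(theta) = 0.5 * k_theta * (theta - theta0)^2 (harmonic bending).
    # The table is built once, a priori, as the abstract describes.
    thetas = [math.pi * (i + 0.5) / n for i in range(n)]
    w = [math.sin(t) * math.exp(-beta * 0.5 * k_theta * (t - theta0) ** 2)
         for t in thetas]
    total = sum(w)
    cdf, c = [], 0.0
    for wi in w:
        c += wi / total
        cdf.append(c)
    return thetas, cdf

def sample_bending_angle(thetas, cdf, rng):
    # Inverse-transform draw: binary-search the tabulated CDF.
    u = rng.random()
    lo, hi = 0, len(cdf) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if cdf[mid] < u:
            lo = mid + 1
        else:
            hi = mid
    return thetas[lo]

# Assumed force-field-style parameters: k/kB = 62500 K/rad^2, theta0 = 1.99 rad,
# T = 300 K (beta in the matching units).
rng = random.Random(42)
thetas, cdf = build_bending_table(62500.0, 1.99, 1.0 / 300.0)
draws = [sample_bending_angle(thetas, cdf, rng) for _ in range(5000)]
mean = sum(draws) / len(draws)
assert abs(mean - 1.99) < 0.05  # draws concentrate near the equilibrium angle
```

For branched molecules the full density couples several angles; the paper's contribution is the approximation that keeps the table low-dimensional while preserving those dependencies.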

  9. Factors affecting summer distributions of Bering Sea forage fish species: Assessing competing hypotheses

    NASA Astrophysics Data System (ADS)

    Parker-Stetter, Sandra; Urmy, Samuel; Horne, John; Eisner, Lisa; Farley, Edward

    2016-12-01

    Hypotheses on the factors affecting forage fish species distributions are often proposed but rarely evaluated using a comprehensive suite of indices. Using 24 predictor indices, we compared competing hypotheses and calculated average models for the distributions of capelin, age-0 Pacific cod, and age-0 pollock in the eastern Bering Sea from 2006 to 2010. Distribution was described using a two-stage modeling approach: probability of occurrence ("presence") and density when fish were present. Both local (varying by location and year) and annual (uniform in space but varying by year) indices were evaluated, the latter accounting for the possibility that distributions were random but that overall presence or densities changed with annual conditions. One regional index, distance to the location of preflexion larvae earlier in the year, was evaluated for age-0 pollock. Capelin distributions were best predicted by local indices such as bottom depth, temperature, and salinity. Annual climate (May sea surface temperature (SST), sea ice extent anomaly) and wind (June wind speed cubed) indices were often important for age-0 Pacific cod in addition to local indices (temperature and depth). Surface, midwater, and water column age-0 pollock distributions were best described by a combination of local (depth, temperature, salinity, zooplankton) and annual (May SST, sea ice anomaly, June wind speed cubed) indices. Our results corroborated some of those in previous distribution studies, but suggested that presence and density may also be influenced by other factors. Even though there were common environmental factors that influenced all species' distributions, it is not possible to generalize conditions for forage fish as a group.
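    The two-stage ("hurdle") structure can be summarized in a few lines: overall expected density is the probability of presence times the conditional density given presence. The logistic/log-linear forms and every coefficient below are invented for illustration, not the paper's fitted models.

```python
import math

def presence_prob(depth, temp, b0=-2.0, b_depth=0.02, b_temp=0.3):
    # Stage 1: probability of occurrence, logistic in the local predictors.
    eta = b0 + b_depth * depth + b_temp * temp
    return 1.0 / (1.0 + math.exp(-eta))

def density_given_present(depth, temp, a0=1.0, a_depth=-0.01, a_temp=0.1):
    # Stage 2: density conditional on presence, log-linear in the predictors.
    return math.exp(a0 + a_depth * depth + a_temp * temp)

def expected_density(depth, temp):
    # Hurdle expectation: Pr(presence) x E[density | present].
    return presence_prob(depth, temp) * density_given_present(depth, temp)

# Under these toy coefficients, a warmer station at the same depth has both
# higher occurrence odds and higher conditional density.
assert expected_density(50, 6.0) > expected_density(50, 2.0)
```

Separating the two stages lets presence and density respond to different indices, which is exactly what the species comparisons above exploit.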

  10. Nest trampling and ground nesting birds: Quantifying temporal and spatial overlap between cattle activity and breeding redshank.

    PubMed

    Sharps, Elwyn; Smart, Jennifer; Mason, Lucy R; Jones, Kate; Skov, Martin W; Garbutt, Angus; Hiddink, Jan G

    2017-08-01

    Conservation grazing for breeding birds needs to balance the positive effects on vegetation structure and negative effects of nest trampling. In the UK, populations of Common redshank Tringa totanus breeding on saltmarshes declined by >50% between 1985 and 2011. These declines have been linked to changes in grazing management. The highest breeding densities of redshank on saltmarshes are found in lightly grazed areas. Conservation initiatives have encouraged low-intensity grazing at <1 cattle/ha, but even these levels of grazing can result in high levels of nest trampling. If livestock distribution is not spatially or temporally homogenous but concentrated where and when redshank breed, rates of nest trampling may be much higher than expected based on livestock density alone. By GPS tracking cattle on saltmarshes and monitoring trampling of dummy nests, this study quantified (i) the spatial and temporal distribution of cattle in relation to the distribution of redshank nesting habitats and (ii) trampling rates of dummy nests. The distribution of livestock was highly variable depending on both time in the season and the saltmarsh under study, with cattle using between 3% and 42% of the saltmarsh extent and spending most of their time on higher elevation habitat within 500 m of the sea wall, but moving further onto the saltmarsh as the season progressed. Breeding redshank also nest on these higher elevation zones, and this breeding coincides with the early period of grazing. Probability of nest trampling was correlated to livestock density and was up to six times higher in the areas where redshank breed. This overlap in both space and time of the habitat use of cattle and redshank means that the trampling probability of a nest can be much higher than would be expected based on standard measures of cattle density. Synthesis and applications: Because saltmarsh grazing is required to maintain a favorable vegetation structure for redshank breeding, grazing management should aim to keep livestock away from redshank nesting habitat between mid-April and mid-July when nests are active, through delaying the onset of grazing or introducing a rotational grazing system.

  11. Estimating population density and connectivity of American mink using spatial capture-recapture

    USGS Publications Warehouse

    Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.

    2016-01-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km2 area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
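    A minimal sketch of the non-Euclidean ingredient: least-cost ("ecological") distance over a resistance grid via Dijkstra's algorithm, plugged into a half-normal SCR detection function. The grid, costs, and detection parameters are toy values for illustration, not the paper's fitted surfaces.

```python
import heapq
import math

def least_cost_distance(cost, src):
    # Dijkstra over a resistance grid: moving into a cell pays that cell's
    # cost. Returns the least-cost distance from src to every cell.
    rows, cols = len(cost), len(cost[0])
    dist = [[math.inf] * cols for _ in range(rows)]
    dist[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def encounter_prob(ecodist, p0=0.8, sigma=2.0):
    # Half-normal SCR detection function evaluated on ecological
    # rather than Euclidean distance.
    return p0 * math.exp(-ecodist**2 / (2 * sigma**2))

# Toy landscape: water (cost 1) vs upland (cost 5). A detector reachable
# along cheap habitat is effectively "closer" to the activity center.
grid = [[1, 1, 1],
        [5, 5, 1],
        [5, 5, 1]]
d = least_cost_distance(grid, (0, 0))
assert d[2][2] < 5 * 2  # the low-cost route beats cutting through upland
assert encounter_prob(d[0][1]) > encounter_prob(d[2][2])
```

Fitting the resistance parameters inside the SCR likelihood is what yields a direct estimate of landscape connectivity alongside density.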

  12. Estimating population density and connectivity of American mink using spatial capture-recapture.

    PubMed

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  13. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? Data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
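    The clock-advance logic behind such calculations can be sketched as follows, with a lognormal renewal density standing in for the Brownian passage-time model and invented parameter values. A static stress step of Δτ on a fault loading at rate τ̇ is treated as advancing the renewal clock by Δτ/τ̇.

```python
import math

def lognormal_cdf(t, mu, sigma):
    return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def conditional_prob(elapsed, window, mean_rec, aperiodicity):
    # P(event in [elapsed, elapsed + window] | quiet through `elapsed`)
    # for a lognormal renewal model parameterized by mean recurrence
    # time and aperiodicity (coefficient of variation).
    sigma = math.sqrt(math.log(1 + aperiodicity**2))
    mu = math.log(mean_rec) - 0.5 * sigma**2
    F = lambda t: lognormal_cdf(t, mu, sigma)
    return (F(elapsed + window) - F(elapsed)) / (1 - F(elapsed))

def prob_after_stress_step(elapsed, window, mean_rec, aperiodicity,
                           stress_change, stressing_rate):
    # Clock-advance approximation: the stress step shifts the effective
    # elapsed time by stress_change / stressing_rate.
    advance = stress_change / stressing_rate
    return conditional_prob(elapsed + advance, window, mean_rec, aperiodicity)

# Hypothetical fault: 150-yr mean recurrence, aperiodicity 0.5, 100 yr
# elapsed; a 1-bar step on a 0.05 bar/yr fault advances the clock 20 yr.
p0 = conditional_prob(100, 30, mean_rec=150, aperiodicity=0.5)
p1 = prob_after_stress_step(100, 30, 150, 0.5,
                            stress_change=1.0, stressing_rate=0.05)
assert p1 > p0  # a positive stress step raises the 30-yr probability
```

Drawing (mean recurrence, aperiodicity) pairs from their distributions and repeating this calculation is what produces the forecast ranges the abstract describes; the stress step must be large relative to the stressing rate before p1 moves outside that range.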

  14. Very High-Frequency (VHF) ionospheric scintillation fading measurements at Lima, Peru

    NASA Technical Reports Server (NTRS)

    Blank, H. A.; Golden, T. S.

    1972-01-01

    During the spring equinox of 1970, scintillating signals at VHF (136.4 MHz) were observed at Lima, Peru. The transmission originated from ATS 3 and was observed through a pair of antennas spaced 1200 feet apart on an east-west baseline. The empirical data were digitized, reduced, and analyzed. The results include amplitude probability density and distribution functions, time autocorrelation functions, cross correlation functions for the spaced antennas, and appropriate spectral density functions. Estimates of the statistics of the ground diffraction pattern give insight into gross ionospheric irregularity size and irregularity velocity in the antenna planes.
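    The spaced-antenna analysis rests on cross-correlation: the lag at which the two antennas' fading records correlate best, combined with the baseline, gives the drift of the diffraction pattern. A toy sketch with simulated fading signals; the sample interval and the resulting speed units are assumptions, not values from the measurements.

```python
import math
import random

def xcorr_lag(a, b, max_lag):
    # Lag (in samples) at which the cross-correlation of b against a peaks;
    # positive lag means b lags (is a delayed copy of) a.
    n = len(a)
    best_lag, best = 0, -math.inf
    for lag in range(-max_lag, max_lag + 1):
        s = sum(a[i] * b[i + lag] for i in range(max(0, -lag), min(n, n - lag)))
        if s > best:
            best, best_lag = s, lag
    return best_lag

# Simulated diffraction pattern drifting along the baseline: the east
# antenna sees the west antenna's fading record after a delay.
rng = random.Random(7)
raw = [rng.gauss(0, 1) for _ in range(2230)]
sig = [sum(raw[i:i + 30]) / 30 for i in range(2000)]  # smoothed fading signal
delay = 24                                            # samples
west = sig[delay:delay + 1500]
east = sig[:1500]
lag = xcorr_lag(west, east, 60)
assert lag == delay
baseline_ft, sample_dt = 1200.0, 0.01  # assumed sample interval (s)
speed = baseline_ft / (lag * sample_dt)  # pattern drift across the baseline
```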

  15. Application of remote sensing for fishery resources assessment and monitoring. [Gulf of Mexico

    NASA Technical Reports Server (NTRS)

    Savastano, K. J. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. The distribution and abundance of white marlin correlated with the chlorophyll, water temperature, and Secchi depth sea truth measurements. Results of correlation analyses for dolphin were inconclusive. Prediction models for white marlin were developed using stepwise multiple regression and discriminant function analysis techniques, which demonstrated a potential for increasing the probability of game fishing success. The S190A and B imagery was density sliced/color enhanced with white marlin location superimposed on the image, but no density/white marlin relationship could be established.

  16. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values, the economic magnitudes of interest, net return and unit production cost, are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
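    The repeated-trial procedure is ordinary Monte Carlo; a compact sketch with invented distributions and parameters (binomial successes, independent lognormal reservoir sizes), echoing the independence assumption the abstract flags as unrealistic.

```python
import random
import statistics

def simulate_program(n_wells=20, p_success=0.15, size_mu=2.0, size_sigma=1.0,
                     price=3.0, well_cost=10.0, n_trials=4000, seed=3):
    # One trial: draw the number of discoveries, then an independent
    # lognormal size for each; net return = revenue - drilling cost.
    # Repeating the trial builds a histogram approximating the PDF of
    # the program's net return. All parameters are illustrative.
    rng = random.Random(seed)
    returns = []
    for _ in range(n_trials):
        successes = sum(rng.random() < p_success for _ in range(n_wells))
        reserves = sum(rng.lognormvariate(size_mu, size_sigma)
                       for _ in range(successes))
        returns.append(price * reserves - well_cost * n_wells)
    return returns

r = simulate_program()
# Losses are floored at the total drilling cost; the upside is unbounded,
# so the simulated return distribution is strongly right-skewed.
assert min(r) >= -200.0
assert statistics.stdev(r) > 0.0
```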

  17. Gravitational wave hotspots: Ranking potential locations of single-source gravitational wave emission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Joseph; Polin, Abigail; Lommen, Andrea

    2014-03-20

    The steadily improving sensitivity of pulsar timing arrays (PTAs) suggests that gravitational waves (GWs) from supermassive black hole binary (SMBHB) systems in the nearby universe will be detectable sometime during the next decade. Currently, PTAs assume an equal probability of detection from every sky position, but as evidence grows for a non-isotropic distribution of sources, is there a most likely sky position for a detectable single source of GWs? In this paper, a collection of Galactic catalogs is used to calculate various metrics related to the detectability of a single GW source resolvable above a GW background, assuming that every galaxy has the same probability of containing an SMBHB. Our analyses of these data reveal small probabilities that one of these sources is currently in the PTA band, but as sensitivity is improved regions of consistent probability density are found in predictable locations, specifically around local galaxy clusters.

  18. Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles.

    PubMed

    Bhatnagar, Akshay; Gustavsson, K; Mitra, Dhrubaditya

    2018-02-01

    We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component V_{R} for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D_{2}. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) |V_{R}|≪R, where the distribution depends solely on R, and (2) |V_{R}|≫R, where the distribution is a function of |V_{R}| alone. The probability distributions in these two regimes are matched along a straight line: |V_{R}|=z^{*}R. Our simulations confirm that this is indeed correct. We further obtain D_{2} and z^{*} as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St≈0.7 and the latter has only a weak dependence on St.

  19. Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Akshay; Gustavsson, K.; Mitra, Dhrubaditya

    2018-02-01

    We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component VR for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D2. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) | VR|≪R , where the distribution depends solely on R , and (2) | VR|≫R , where the distribution is a function of | VR| alone. The probability distributions in these two regimes are matched along a straight line: | VR|= z*R . Our simulations confirm that this is indeed correct. We further obtain D2 and z* as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St≈0.7 and the latter has only a weak dependence on St.

  20. Large scale IRAM 30 m CO-observations in the giant molecular cloud complex W43

    NASA Astrophysics Data System (ADS)

    Carlhoff, P.; Nguyen Luong, Q.; Schilke, P.; Motte, F.; Schneider, N.; Beuther, H.; Bontemps, S.; Heitsch, F.; Hill, T.; Kramer, C.; Ossenkopf, V.; Schuller, F.; Simon, R.; Wyrowski, F.

    2013-12-01

    We aim to fully describe the distribution and location of dense molecular clouds in the giant molecular cloud complex W43. It was previously identified as one of the most massive star-forming regions in our Galaxy. To trace the moderately dense molecular clouds in the W43 region, we initiated W43-HERO, a large program using the IRAM 30 m telescope, which covers a wide dynamic range of scales from 0.3 to 140 pc. We obtained on-the-fly-maps in 13CO (2-1) and C18O (2-1) with a high spectral resolution of 0.1 km s^-1 and a spatial resolution of 12''. These maps cover an area of ~1.5 square degrees and include the two main clouds of W43 and the lower density gas surrounding them. A comparison to Galactic models and previous distance calculations confirms the location of W43 near the tangential point of the Scutum arm at approximately 6 kpc from the Sun. The resulting intensity cubes of the observed region are separated into subcubes, which are centered on single clouds and then analyzed in detail. The optical depth, excitation temperature, and H2 column density maps are derived out of the 13CO and C18O data. These results are then compared to those derived from Herschel dust maps. The mass of a typical cloud is several 10^4 M⊙ while the total mass in the dense molecular gas (>10^2 cm^-3) in W43 is found to be ~1.9 × 10^6 M⊙. Probability distribution functions obtained from column density maps derived from molecular line data and Herschel imaging show a log-normal distribution for low column densities and a power-law tail for high densities. A flatter slope for the molecular line data probability distribution function may imply that those selectively show the gravitationally collapsing gas. Appendices are available in electronic form at http://www.aanda.org. The final datacubes (13CO and C18O) for the entire survey are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A24
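    The log-normal-plus-power-law diagnostic can be sketched on synthetic data: build the column-density PDF in log-spaced bins and measure the slope of the high-density tail. The toy distribution below (log-normal bulk plus a pdf ~ N^-3 tail) is invented to mimic that shape, not drawn from the W43 maps.

```python
import math
import random

def pdf_tail_slope(samples, lo, hi, n_bins=12):
    # Histogram in log-spaced bins, then least-squares slope of
    # log10(PDF) vs log10(N): the power-law index of the high tail.
    edges = [lo * (hi / lo) ** (i / n_bins) for i in range(n_bins + 1)]
    counts = [0] * n_bins
    for x in samples:
        if lo <= x < hi:
            i = int(math.log(x / lo) / math.log(hi / lo) * n_bins)
            counts[min(i, n_bins - 1)] += 1
    xs, ys = [], []
    for i in range(n_bins):
        if counts[i] > 0:
            center = math.sqrt(edges[i] * edges[i + 1])
            pdf = counts[i] / (len(samples) * (edges[i + 1] - edges[i]))
            xs.append(math.log10(center))
            ys.append(math.log10(pdf))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Synthetic "column densities": 90% log-normal bulk (turbulence-dominated)
# plus 10% Pareto tail with pdf ~ N^-3 (self-gravitating gas stand-in).
rng = random.Random(11)
samples = []
for _ in range(200000):
    if rng.random() < 0.9:
        samples.append(rng.lognormvariate(0.0, 0.5))
    else:
        samples.append(3.0 * (1.0 - rng.random()) ** -0.5)
slope = pdf_tail_slope(samples, 5.0, 50.0)
assert -3.6 < slope < -2.4  # recovers the input tail index of about -3
```

Comparing such fitted slopes between tracers is the basis for the "flatter slope selects collapsing gas" argument above.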

  1. Statistical Orbit Determination using the Particle Filter for Incorporating Non-Gaussian Uncertainties

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

    2012-01-01

    The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. Through representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
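    A one-dimensional bootstrap particle filter shows the propagate-weight-resample cycle the abstract describes. The dynamics here are a toy drifting state, not an orbit model; the point is that the weighted particle cloud represents the full posterior PDF, Gaussian or not.

```python
import math
import random

def particle_filter(observations, n_particles=2000, proc_std=0.5, obs_std=1.0,
                    seed=5):
    # Bootstrap PF: propagate each particle through the (possibly nonlinear)
    # dynamics, weight by the observation likelihood, then resample.
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 2.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]  # predict
        w = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))  # post. mean
        # Systematic resampling to avoid weight degeneracy.
        step = 1.0 / n_particles
        u, i, c, new = rng.random() * step, 0, w[0], []
        for _ in range(n_particles):
            while u > c and i < n_particles - 1:
                i += 1
                c += w[i]
            new.append(particles[i])
            u += step
        particles = new
    return estimates

# Track a slow drift through noisy observations: the PF posterior mean
# should follow the true state closely.
rng = random.Random(9)
truth, obs, x = [], [], 0.0
for _ in range(60):
    x += 0.2
    truth.append(x)
    obs.append(x + rng.gauss(0.0, 1.0))
est = particle_filter(obs)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
assert rmse < 1.0
```

Because the cloud is a nonparametric PDF estimate, tail quantiles (the low-probability states relevant to collision risk) come for free from the same particles.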

  2. Self-Supervised Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.
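    A toy rendering of the coupling, with a particle ensemble standing in for the evolving probability density and its mean acting as the information-based "supervising force". This illustrates only the feedback structure (density functional feeding back into the Langevin drift); it is not Zak's actual equations.

```python
import random
import statistics

def self_supervised_step(ensemble, dt=0.05, noise=0.2, gamma=1.5, rng=None):
    # Motor dynamics: an Euler-Maruyama Langevin step. Mental dynamics:
    # the ensemble approximates the evolving density, and a functional of
    # it (here simply the mean) feeds back into every particle's drift.
    m = sum(ensemble) / len(ensemble)  # density functional
    return [x - gamma * (x - m) * dt + rng.gauss(0, noise) * dt ** 0.5
            for x in ensemble]

rng = random.Random(2)
ensemble = [rng.gauss(0, 3.0) for _ in range(3000)]
for _ in range(200):
    ensemble = self_supervised_step(ensemble, rng=rng)
# The density-dependent feedback contracts the distribution toward its own
# mean: the spread falls from 3.0 to a small noise-balanced steady state.
assert statistics.stdev(ensemble) < 1.0
```

Because the drift depends on the density itself, the corresponding Fokker-Planck equation is nonlinear in the density, which is the defining feature described above.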

  3. Sine-gordon type field in spacetime of arbitrary dimension. II: Stochastic quantization

    NASA Astrophysics Data System (ADS)

    Kirillov, A. I.

    1995-11-01

    Using the theory of Dirichlet forms, we prove the existence of a distribution-valued diffusion process such that the Nelson measure of a field with a bounded interaction density is its invariant probability measure. A Langevin equation in mathematically correct form is formulated which is satisfied by the process. The drift term of the equation is interpreted as a renormalized Euclidean current operator.

  4. Assessment of Template-Based Modeling of Protein Structure in CASP11

    PubMed Central

    Modi, Vivek; Xu, Qifang; Adhikari, Sam; Dunbrack, Roland L.

    2016-01-01

    We present the assessment of predictions submitted in the template-based modeling (TBM) category of CASP11 (Critical Assessment of Protein Structure Prediction). Model quality was judged on the basis of global and local measures of accuracy on all atoms including side chains. The top groups on 39 human-server targets based on model 1 predictions were LEER, Zhang, LEE, MULTICOM, and Zhang-Server. The top groups on 81 targets by server groups based on model 1 predictions were Zhang-Server, nns, BAKER-ROSETTASERVER, QUARK, and myprotein-me. In CASP11, the best models for most targets were equal to or better than the best template available in the Protein Data Bank, even for targets with poor templates. The overall performance in CASP11 is similar to the performance of predictors in CASP10 with slightly better performance on the hardest targets. For most targets, assessment measures exhibited bimodal probability density distributions. Multi-dimensional scaling of an RMSD matrix for each target typically revealed a single cluster with models similar to the target structure, with a mode in the GDT-TS density between 40 and 90, and a wide distribution of models highly divergent from each other and from the experimental structure, with density mode at a GDT-TS value of ~20. The models in this peak in the density were either compact models with entirely the wrong fold, or highly non-compact models. The results argue for a density-driven approach in future CASP TBM assessments that accounts for the bimodal nature of these distributions instead of Z-scores, which assume a unimodal, Gaussian distribution. PMID:27081927

  5. The 2-10 keV unabsorbed luminosity function of AGN from the LSS, CDFS, and COSMOS surveys

    NASA Astrophysics Data System (ADS)

    Ranalli, P.; Koulouridis, E.; Georgantopoulos, I.; Fotopoulou, S.; Hsu, L.-T.; Salvato, M.; Comastri, A.; Pierre, M.; Cappelluti, N.; Carrera, F. J.; Chiappetti, L.; Clerc, N.; Gilli, R.; Iwasawa, K.; Pacaud, F.; Paltani, S.; Plionis, E.; Vignali, C.

    2016-05-01

    The XMM-Large scale structure (XMM-LSS), XMM-Cosmological evolution survey (XMM-COSMOS), and XMM-Chandra deep field south (XMM-CDFS) surveys are complementary in terms of sky coverage and depth. Together, they form a clean sample with the least possible variance in instrument effective areas and point spread function. Therefore this is one of the best samples available to determine the 2-10 keV luminosity function of active galactic nuclei (AGN) and their evolution. The samples and the relevant corrections for incompleteness are described. A total of 2887 AGN are used to build the LF in the luminosity interval 10^42-10^46 erg s^-1 and in the redshift interval 0.001-4. A new method to correct for absorption by considering the probability distribution for the column density conditioned on the hardness ratio is presented. The binned luminosity function and its evolution are determined with a variant of the Page-Carrera method, which is improved to include corrections for absorption and to account for the full probability distribution of photometric redshifts. Parametric models, namely a double power law with luminosity and density evolution (LADE) or luminosity-dependent density evolution (LDDE), are explored using Bayesian inference. We introduce the Watanabe-Akaike information criterion (WAIC) to compare the models and estimate their predictive power. Our data are best described by the LADE model, as hinted by the WAIC indicator. We also explore the recently proposed 15-parameter extended LDDE model and find that this extension is not supported by our data. The strength of our method is that it provides unabsorbed, non-parametric estimates, credible intervals for luminosity function parameters, and a model choice based on predictive power for future data. 
Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA member states and NASA.Tables with the samples of the posterior probability distributions are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A80

  6. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.
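    The integration described above can be approximated by direct Monte Carlo sampling of the error ellipse; a minimal sketch, in which the ellipse covariance, query point, and radius are all hypothetical illustration values rather than numbers from the technique's validation:

```python
import numpy as np

def prob_within_radius(mean, cov, point, radius, n=200_000, seed=0):
    """Monte Carlo estimate of P(stroke within `radius` of `point`),
    with the stroke location bivariate Gaussian with (mean, cov)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n)
    d = np.linalg.norm(samples - np.asarray(point), axis=1)
    return float(np.mean(d <= radius))

# Error ellipse centered at the origin (std 0.5 km east, 1 km north);
# query point 1 km away, 2 km radius of concern.
p = prob_within_radius([0.0, 0.0], [[0.25, 0.0], [0.0, 1.0]], [1.0, 0.0], 2.0)
```

    Note that the query point need not lie inside the ellipse; the same call works for any point and radius.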

  7. Analysis of TPA Pulsed-Laser-Induced Single-Event Latchup Sensitive-Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Sternberg, Andrew L.; Kozub, John A.

    Two-photon absorption (TPA) testing is employed to analyze the laser-induced latchup sensitive-volume (SV) of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transitions from zero to one as the laser pulse energy increases. This variability is attributed to pulse-to-pulse variability, uncertainty in measurement of the pulse energy, and variation in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated from multiple pulses. The latchup probability data are well-described by a Weibull distribution. The results show that the area between p-n-p-n cell structures is more sensitive than the p+ and n+ source areas, and locations far from the well contacts are more sensitive than those near the contact region. The transition from low probability of latchup to high probability is more abrupt near the source contacts than it is for the surrounding areas.

  8. Analysis of TPA Pulsed-Laser-Induced Single-Event Latchup Sensitive-Area

    DOE PAGES

    Wang, Peng; Sternberg, Andrew L.; Kozub, John A.; ...

    2017-12-07

    Two-photon absorption (TPA) testing is employed to analyze the laser-induced latchup sensitive-volume (SV) of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transitions from zero to one as the laser pulse energy increases. This variability is attributed to pulse-to-pulse variability, uncertainty in measurement of the pulse energy, and variation in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated from multiple pulses. The latchup probability data are well-described by a Weibull distribution. The results show that the area between p-n-p-n cell structures is more sensitive than the p+ and n+ source areas, and locations far from the well contacts are more sensitive than those near the contact region. The transition from low probability of latchup to high probability is more abrupt near the source contacts than it is for the surrounding areas.
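    The Weibull description of per-site latchup probability versus pulse energy can be sketched as follows; the onset energy, scale, and shape values are hypothetical, and the crude grid-search fit stands in for whatever fitting procedure the authors actually used:

```python
import numpy as np

def weibull_cdf(E, E0, eta, beta):
    """Latchup probability vs. pulse energy: zero below onset E0,
    rising as a Weibull with scale eta and shape beta."""
    E = np.asarray(E, dtype=float)
    return 1.0 - np.exp(-np.clip((E - E0) / eta, 0.0, None) ** beta)

def fit_weibull(E, p_obs, E0=0.0):
    """Crude grid search over (eta, beta) minimizing squared error."""
    best, best_err = None, np.inf
    for eta in np.linspace(0.1, 5.0, 50):
        for beta in np.linspace(0.5, 6.0, 56):
            err = np.sum((weibull_cdf(E, E0, eta, beta) - p_obs) ** 2)
            if err < best_err:
                best, best_err = (eta, beta), err
    return best

# Recover known parameters from noiseless synthetic trigger fractions.
E = np.linspace(0.5, 8.0, 20)
p_true = weibull_cdf(E, 0.0, 2.0, 3.0)
eta_fit, beta_fit = fit_weibull(E, p_true)
```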

  9. Site specific passive acoustic detection and densities of humpback whale calls off the coast of California

    NASA Astrophysics Data System (ADS)

    Helble, Tyler Adam

    Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. Knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single, fixed, omnidirectional sensors to be compared across sensors and at the same sensor over time with less bias and uncertainty in the results than direct comparison of the raw statistics. This dissertation focuses on both the development of new tools needed to automatically detect humpback whale vocalizations from single fixed omnidirectional sensors as well as the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values, and presented as call densities (calls per square kilometer per unit time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities also are compared to several natural and human-influenced variables including season, time of day, lunar illumination, and ocean noise. 
The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The techniques developed in this thesis make it possible to measure marine mammal abundances accurately from passive acoustic sensors.
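    The calibration from raw call counts to call densities divides by the site-specific probability of detection over the monitored area and time; a minimal sketch, in which the function name and all numbers are hypothetical:

```python
def call_density(n_calls, p_detect, area_km2, hours):
    """Calibrated call density (calls per km^2 per hour): raw counts
    divided by the site-specific probability of detection, the monitored
    area, and the monitoring duration."""
    return n_calls / (p_detect * area_km2 * hours)

# Two sites with identical raw counts but different detection probabilities
# yield different calibrated densities, which is the point of the correction.
d_site_a = call_density(120, 0.4, 50.0, 24.0)
d_site_b = call_density(120, 0.8, 50.0, 24.0)
```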

  10. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T0 and T1, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
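    In the special case of exponential dwell times the model reduces to a Markov chain, and its steady state can be checked against the analytic mean k_tx/gamma * k_on/(k_on + k_off). A minimal Gillespie-style sketch, with hypothetical rate constants, not the authors' population density method:

```python
import random

def simulate_mrna(k_on=1.0, k_off=1.0, k_tx=10.0, gamma=1.0,
                  t_end=5000.0, seed=1):
    """Gillespie simulation of the two-state (telegraph) gene model with
    exponential dwell times; returns the time-averaged mRNA copy number."""
    rng = random.Random(seed)
    t, gene_on, m = 0.0, False, 0
    acc = 0.0  # time-weighted mRNA count
    while t < t_end:
        switch = k_off if gene_on else k_on   # state-switching rate
        tx = k_tx if gene_on else 0.0         # transcription rate
        deg = gamma * m                       # total degradation rate
        rate = switch + tx + deg
        dt = rng.expovariate(rate)
        acc += m * min(dt, t_end - t)
        t += dt
        if t >= t_end:
            break
        u = rng.random() * rate
        if u < switch:
            gene_on = not gene_on
        elif u < switch + tx:
            m += 1
        else:
            m -= 1
    return acc / t_end

# With k_on = k_off, the mean should approach k_tx/gamma * 1/2 = 5.
m_avg = simulate_mrna()
```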

  11. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    NASA Astrophysics Data System (ADS)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market; the aggregated price can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." 
Settlement is based on global temperature anomaly data published by NASA GISS. Typical climate contracts predict the probability of a specified future temperature, but not the probability density or best estimate. One way to generate a probability distribution would be to create a family of contracts over a range of specified temperatures and interpret the price of each contract as its exceedance probability. The resulting plot of probability vs. anomaly is the market-based cumulative distribution function. The best estimate can be determined by interpolation, and the market-based uncertainty estimate can be based on the spread. One requirement for an effective prediction market is liquidity. Climate contracts are currently considered somewhat of a novelty and often lack sufficient liquidity, but climate change has the potential to generate both tremendous losses for some (e.g. agricultural collapse and extreme weather events) and wealth for others (access to natural resources and trading routes). Use of climate markets by large stakeholders has the potential to generate the liquidity necessary to make them viable. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.
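    The family-of-contracts construction can be sketched directly; the strike temperatures and contract prices below are hypothetical illustration values:

```python
import numpy as np

# Hypothetical contract strikes (temperature anomaly, deg C) and prices,
# read as exceedance probabilities P(T > threshold).
thresholds = np.array([0.45, 0.55, 0.65, 0.75, 0.85])
exceedance = np.array([0.95, 0.80, 0.50, 0.20, 0.05])

cdf = 1.0 - exceedance                       # market-implied CDF
median = np.interp(0.5, cdf, thresholds)     # best estimate (50th percentile)
# Rough 1-sigma half-width from the 16th-84th percentile spread.
spread = (np.interp(0.84, cdf, thresholds)
          - np.interp(0.16, cdf, thresholds)) / 2.0
```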

  12. Public opinion by a poll process: model study and Bayesian view

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Keun; Kim, Yong Woon

    2018-05-01

    We study the formation of public opinion in a poll process where the current score is open to the public. The voters are assumed to vote probabilistically for or against their own preference considering the group opinion collected up to then in the score. The poll-score probability is found to follow the beta distribution in the limit of a large number of polls. We demonstrate that various poll results, even those contradictory to the population preference, are possible with non-zero probability density and that such deviations are readily triggered by initial bias. We note that our poll model can be understood from a Bayesian viewpoint.
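    A Polya-urn caricature of a score-feedback poll reproduces the beta-distribution result; this sketch is illustrative only and is not the authors' exact voting rule:

```python
import random

def polya_poll(n_votes=200, a0=1, b0=1, seed=None):
    """Polya-urn caricature of an open-score poll: each voter sides 'for'
    with probability equal to the current 'for' fraction (including prior
    pseudo-counts a0, b0). The final score fraction is Beta(a0, b0)
    distributed in the large-poll limit; a0 = b0 = 1 gives a uniform law."""
    rng = random.Random(seed)
    a, b = a0, b0
    for _ in range(n_votes):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return (a - a0) / n_votes
```

    With a0 = b0 = 1 the final fraction is (asymptotically) uniform on [0, 1], so even scores far from 50/50 occur with appreciable probability, mirroring the paper's point that poll results contradicting the population preference are possible.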

  13. Non-renewal statistics for electron transport in a molecular junction with electron-vibration interaction

    NASA Astrophysics Data System (ADS)

    Kosov, Daniel S.

    2017-09-01

    Quantum transport of electrons through a molecule is a series of individual electron tunneling events separated by stochastic waiting time intervals. We study the emergence of temporal correlations between successive waiting times for the electron transport in a vibrating molecular junction. Using the master equation approach, we compute the joint probability distribution for waiting times of two successive tunneling events. We show that the probability distribution is completely reset after each tunneling event if molecular vibrations are thermally equilibrated. If we treat vibrational dynamics exactly without imposing the equilibration constraint, the statistics of electron tunneling events become non-renewal. Non-renewal statistics between two waiting times τ1 and τ2 means that the density matrix of the molecule is not fully renewed after time τ1 and the probability of observing waiting time τ2 for the second electron transfer depends on the previous electron waiting time τ1. The strong electron-vibration coupling is required for the emergence of the non-renewal statistics. We show that in the Franck-Condon blockade regime, extremely rare tunneling events become positively correlated.

  14. Shade tree spatial structure and pod production explain frosty pod rot intensity in cacao agroforests, Costa Rica.

    PubMed

    Gidoin, Cynthia; Avelino, Jacques; Deheuvels, Olivier; Cilas, Christian; Bieng, Marie Ange Ngo

    2014-03-01

    Vegetation composition and plant spatial structure affect disease intensity through resource and microclimatic variation effects. The aim of this study was to evaluate the independent effect and relative importance of host composition and plant spatial structure variables in explaining disease intensity at the plot scale. For that purpose, frosty pod rot intensity, a disease caused by Moniliophthora roreri on cacao pods, was monitored in 36 cacao agroforests in Costa Rica in order to assess the vegetation composition and spatial structure variables conducive to the disease. Hierarchical partitioning was used to identify the most causal factors. Firstly, pod production, cacao tree density and shade tree spatial structure had significant independent effects on disease intensity. In our case study, the amount of susceptible tissue was the most relevant host composition variable for explaining disease intensity by resource dilution. Indeed, cacao tree density probably affected disease intensity more through the creation of self-shading than through host dilution. Lastly, only regularly distributed forest trees, and not aggregated or randomly distributed forest trees, reduced disease intensity in comparison to plots with a low forest tree density. A regular spatial structure is probably crucial to the creation of moderate and uniform shade as recommended for frosty pod rot management. As pod production is an important service expected from these agroforests, shade tree spatial structure may be a lever for integrated management of frosty pod rot in cacao agroforests.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
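    One standard way to build such a ZMNL function is to compose the Gaussian CDF with the inverse CDF of the desired marginal, y = F^-1(Phi(x)); a sketch with an exponential target marginal, chosen only as an example and not taken from the paper:

```python
import numpy as np
from math import erf, sqrt

# Standard normal CDF, vectorized over numpy arrays.
_phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))

def zmnl(x, target_ppf):
    """Zero-memory nonlinearity: map a standard-normal sample to uniform
    via Phi, then through the target marginal's inverse CDF (ppf)."""
    return target_ppf(_phi(np.asarray(x)))

# Transform Gaussian noise into a unit-rate exponential marginal.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = zmnl(x, lambda u: -np.log1p(-u))   # exponential(1) inverse CDF
```

    Because the transform is applied sample by sample (zero memory), the ordering, and hence much of the spectral content, of the input history is preserved.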

  16. A computer simulation of free-volume distributions and related structural properties in a model lipid bilayer.

    PubMed Central

    Xiang, T X

    1993-01-01

    A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 x 100 chain molecules, each having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å^2/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region with average values of 0.012 and 0.00039 Å^-3, respectively, and increase to 0.049 and 0.0067 Å^-3 at the mid-plane. The first characteristic cavity size V1 is only weakly dependent on position in the bilayer interior with an average value of 3.4 Å^3, while the second characteristic cavity size V2 varies more dramatically from a plateau value of 12.9 Å^3 in the highly ordered chain region to 9.0 Å^3 in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one chain segment while not overlapping with any other chain segment in the bilayer. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that both overall and internal motions make comparable contributions to local disorder and couple strongly with each other, and that kink defects occur with higher probability than predicted from an independent-transition model. PMID:8241390

  17. Fluctuations and intermittent poloidal transport in a simple toroidal plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goud, T. S.; Ganesh, R.; Saxena, Y. C.

    In a simple magnetized toroidal plasma, fluctuation induced poloidal flux is found to be significant in magnitude. The probability distribution function of the fluctuation induced poloidal flux is observed to be strongly non-Gaussian in nature; however, in some cases, the distribution shows good agreement with the analytical form [Carreras et al., Phys. Plasmas 3, 2664 (1996)], assuming a coupling between the near Gaussian density and poloidal velocity fluctuations. The observed non-Gaussian nature of the fluctuation induced poloidal flux and other plasma parameters such as density and fluctuating poloidal velocity in this device is due to the intermittent and bursty nature of poloidal transport. In the simple magnetized torus used here, such an intermittent fluctuation induced poloidal flux is found to play a crucial role in generating the poloidal flow.

  18. Spatial distribution of nuclei in progressive nucleation: Modeling and application

    NASA Astrophysics Data System (ADS)

    Tomellini, Massimo

    2018-04-01

    Phase transformations ruled by non-simultaneous nucleation and growth do not lead to random distribution of nuclei. Since nucleation is only allowed in the untransformed portion of space, positions of nuclei are correlated. In this article an analytical approach is presented for computing pair-correlation function of nuclei in progressive nucleation. This quantity is further employed for characterizing the spatial distribution of nuclei through the nearest neighbor distribution function. The modeling is developed for nucleation in 2D space with power growth law and it is applied to describe electrochemical nucleation where correlation effects are significant. Comparison with both computer simulations and experimental data lends support to the model which gives insights into the transition from Poissonian to correlated nearest neighbor probability density.
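    The uncorrelated (Poissonian) baseline from which the correlated nearest-neighbor distribution deviates can be checked numerically: for a 2D Poisson process of intensity rho, the nearest-neighbor distance has survival function exp(-pi * rho * r^2). A sketch using periodic (torus) distances to avoid edge effects; the point count is a hypothetical choice:

```python
import numpy as np

rng = np.random.default_rng(3)

L = 1.0        # unit square, periodic boundaries
n = 1000       # number of nuclei (hypothetical)
pts = rng.uniform(0.0, L, size=(n, 2))

# Pairwise torus distances: take the shorter way around in each axis.
d = np.abs(pts[:, None, :] - pts[None, :, :])
d = np.minimum(d, L - d)
dist = np.sqrt((d ** 2).sum(axis=-1))
np.fill_diagonal(dist, np.inf)
nn = dist.min(axis=1)          # nearest-neighbor distance per point

rho = n / L**2
# Analytic Poisson benchmarks: median and mean nearest-neighbor distance.
median_theory = np.sqrt(np.log(2.0) / (np.pi * rho))
mean_theory = 0.5 / np.sqrt(rho)
```

    Correlated (progressive) nucleation would show a depleted small-r tail relative to this benchmark, which is the deviation the model quantifies.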

  19. A compound scattering pdf for the ultrasonic echo envelope and its relationship to K and Nakagami distributions.

    PubMed

    Shankar, P Mohana

    2003-03-01

    A compound probability density function (pdf) is presented to describe the envelope of the backscattered echo from tissue. This pdf allows local and global variation in scattering cross sections in tissue. The ultrasonic backscattering cross sections are assumed to be gamma distributed. The gamma distribution also is used to model the randomness in the average cross sections. This gamma-gamma model results in the compound scattering pdf for the envelope. The relationship of this compound pdf to the Rayleigh, K, and Nakagami distributions is explored through an analysis of the signal-to-noise ratio of the envelopes and random number simulations. The three parameter compound pdf appears to be flexible enough to represent envelope statistics giving rise to Rayleigh, K, and Nakagami distributions.
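    The gamma-gamma compounding is straightforward to simulate; the sketch below, with hypothetical shape parameters, draws cross sections whose local average is itself gamma distributed and computes the envelope signal-to-noise ratio, the discriminating statistic analyzed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

def compound_envelope(n, k_local=2.0, k_global=3.0, mean=1.0):
    """Gamma-gamma compound draw: the scattering cross-section is gamma
    distributed with a local mean that is itself gamma distributed."""
    local_mean = rng.gamma(k_global, mean / k_global, size=n)
    sigma = rng.gamma(k_local, local_mean / k_local)
    return np.sqrt(sigma)   # envelope taken as sqrt of scattered power

env = compound_envelope(200_000)
snr = env.mean() / env.std()   # envelope SNR used to compare pdf families
```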

  20. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.
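    The tail behavior contrasted in the abstract can be illustrated by comparing Poisson and negative binomial models at the same mean; the mean of 1.5 goals is a hypothetical value, not a fit to the paper's data:

```python
from math import exp, factorial, comb

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def negbin_pmf(k, r, p):
    """Negative binomial pmf on counts k = 0, 1, 2, ... (r, p parameters)."""
    return comb(k + r - 1, k) * (1 - p) ** k * p ** r

# Same mean (1.5 goals) for both models; the negative binomial (r = 3)
# has the heavier tail, so it assigns far more probability to high scores.
lam = 1.5
r = 3
p = r / (r + lam)   # negbin mean r(1-p)/p equals lam
tail_poisson = 1.0 - sum(poisson_pmf(k, lam) for k in range(7))  # P(X >= 7)
tail_negbin = 1.0 - sum(negbin_pmf(k, r, p) for k in range(7))
```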

  1. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take arbitrary forms in order to assess the impact of the assumed distribution, and we obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
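    The covariance-based least-squares step can be reproduced with Peelle's classic numbers: two measurements of the same quantity, 1.5 and 1.0, with 10% statistical and 20% fully correlated normalization uncertainties. These illustrative values follow the usual statement of the puzzle, not necessarily the simulations of this paper:

```python
import numpy as np

m = np.array([1.5, 1.0])
stat = 0.10 * m                                   # 10% statistical errors
C = np.diag(stat ** 2) + (0.20 ** 2) * np.outer(m, m)  # + common 20% term

# Generalized least-squares estimate of the single underlying value:
# x = (1^T C^-1 m) / (1^T C^-1 1).
ones = np.ones(2)
Cinv = np.linalg.inv(C)
x_ls = (ones @ Cinv @ m) / (ones @ Cinv @ ones)
```

    The estimate falls below both measurements, the counterintuitive 0.88 discussed above; the paper's Monte Carlo analysis explains when this value is, in fact, correct.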

  2. Quantifying Uncertainties in the Thermo-Mechanical Properties of Particulate Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Murthy, Pappu L. N.

    1999-01-01

    The present paper reports results from a computational simulation of probabilistic particulate reinforced composite behavior. The approach consists of using simplified micromechanics of particulate reinforced composites together with a Fast Probability Integration (FPI) technique. Sample results are presented for an Al/SiC_p (silicon carbide particles in an aluminum matrix) composite. The probability density functions for composite moduli, thermal expansion coefficient, and thermal conductivities, along with their sensitivity factors, are computed. The effect of different assumed distributions and the effect of reducing scatter in constituent properties on the thermal expansion coefficient are also evaluated. The variations in the constituent properties that directly affect these composite properties are accounted for by assumed probabilistic distributions. The results show that the present technique provides valuable information about the scatter in composite properties and sensitivity factors, which are useful to test or design engineers.

  3. Examples of measurement uncertainty evaluations in accordance with the revised GUM

    NASA Astrophysics Data System (ADS)

    Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.

    2016-11-01

    The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the expression of uncertainty in measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was conducted for both type A and type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate associated standard uncertainties, expanded uncertainties, and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. When the coverage interval is not probabilistically symmetric, the coverage interval for the probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
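    The propagation of distributions via the MCM can be sketched for a toy measurement model; the model Y = X1 * X2 and the input distributions below are hypothetical, chosen so that the rectangular (non-Gaussian) input dominates:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Hypothetical measurement model Y = X1 * X2 with a Gaussian input and a
# dominant rectangular (uniform) input, propagated by Monte Carlo.
x1 = rng.normal(10.0, 0.1, N)
x2 = rng.uniform(0.95, 1.05, N)
y = x1 * x2

u_y = y.std(ddof=1)                          # standard uncertainty of Y
lo, hi = np.quantile(y, [0.025, 0.975])      # 95 % coverage interval
```

    The coverage interval comes from empirical quantiles of the output sample, so no Gaussian assumption about Y is needed, which is the point of the MCM route.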

  4. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.

  5. Effects of communication burstiness on consensus formation and tipping points in social dynamics

    NASA Astrophysics Data System (ADS)

    Doyle, C.; Szymanski, B. K.; Korniss, G.

    2017-06-01

    Current models for opinion dynamics typically utilize a Poisson process for speaker selection, making the waiting time between events exponentially distributed. Human interaction tends to be bursty though, having higher probabilities of either extremely short waiting times or long periods of silence. To quantify the burstiness effects on the dynamics of social models, we place in competition two groups exhibiting different speakers' waiting-time distributions. These competitions are implemented in the binary naming game and show that the relevant aspect of the waiting-time distribution is the density of the head rather than that of the tail. We show that even with identical mean waiting times, a group with a higher density of short waiting times is favored in competition over the other group. This effect remains in the presence of nodes holding a single opinion that never changes, as the fraction of such committed individuals necessary for achieving consensus decreases dramatically when they have a higher head density than the holders of the competing opinion. Finally, to quantify differences in burstiness, we introduce the expected number of small-time activations and use it to characterize the early-time regime of the system.

  6. Extracting the distribution of laser damage precursors on fused silica surfaces for 351 nm, 3 ns laser pulses at high fluences (20-150 J/cm^2).

    PubMed

    Laurence, Ted A; Bude, Jeff D; Ly, Sonny; Shen, Nan; Feit, Michael D

    2012-05-07

    Surface laser damage limits the lifetime of optics for systems guiding high fluence pulses, particularly damage in silica optics used for inertial confinement fusion-class lasers (nanosecond-scale high energy pulses at 355 nm/3.5 eV). The density of damage precursors at low fluence has been measured using large beams (1-3 cm); higher fluences cannot be measured easily since the high density of resulting damage initiation sites results in clustering. We developed automated experiments and analysis that allow us to damage test thousands of sites with small beams (10-30 µm), and automatically image the test sites to determine if laser damage occurred. We developed an analysis method that provides a rigorous connection between these small beam damage test results of damage probability versus laser pulse energy and the large beam damage results of damage precursor densities versus fluence. We find that for uncoated and coated fused silica samples, the distribution of precursors nearly flattens at very high fluences, up to 150 J/cm2, providing important constraints on the physical distribution and nature of these precursors.
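    The link between small-beam damage probability and precursor density rests on a Poisson initiation model; a minimal sketch (the effective-area treatment here is a crude simplification of the paper's Gaussian-beam analysis, and all numbers are hypothetical):

```python
import numpy as np

def precursor_density(p_damage, beam_area_cm2):
    """Invert the Poisson initiation model  P = 1 - exp(-rho * A)
    to obtain the precursor density rho (precursors per cm^2)."""
    p = np.asarray(p_damage, dtype=float)
    return -np.log1p(-p) / beam_area_cm2

# Hypothetical small-beam test: a 20 um diameter spot, with the
# effective area crudely taken as the full spot area.
radius_cm = 10e-4
area = np.pi * radius_cm**2          # ~3.1e-6 cm^2

# Hypothetical damage probabilities measured at three increasing fluences:
p = np.array([0.05, 0.50, 0.95])
rho = precursor_density(p, area)     # precursor density rises with fluence
```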

  7. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    PubMed

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

    The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds requested to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness in detection of the proposed method both in terms of detection promptness and accuracy.
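    A one-dimensional sketch of the least squares density-difference (LSDD) estimator that such a test builds on (Gaussian kernels with fixed bandwidth and regularization, no reservoir sampling or threshold calibration; values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def lsdd(x1, x2, sigma=1.0, lam=1e-3):
    """Least squares density-difference estimate of the squared L2
    distance between the densities of samples x1 and x2 (1-D sketch).
    Gaussian kernels are centred on a subset of the pooled sample."""
    centers = np.concatenate([x1[:50], x2[:50]])
    d2 = (centers[:, None] - centers[None, :]) ** 2
    # Closed-form Gram matrix of the Gaussian basis functions.
    H = np.sqrt(np.pi) * sigma * np.exp(-d2 / (4 * sigma**2))
    phi = lambda x: np.exp(-(x[:, None] - centers[None, :]) ** 2
                           / (2 * sigma**2))
    h = phi(x1).mean(axis=0) - phi(x2).mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return 2 * theta @ h - theta @ H @ theta   # estimated L2^2 distance

same = lsdd(rng.normal(0, 1, 500), rng.normal(0, 1, 500))
diff = lsdd(rng.normal(0, 1, 500), rng.normal(1.5, 1, 500))
```

    A change is declared when the estimated distance between the reference and current windows exceeds a threshold; the paper derives that threshold automatically from a target false positive rate.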

  8. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    DOE PAGES

    Clerkin, L.; Kirk, D.; Manera, M.; ...

    2016-08-30

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.

  9. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    NASA Astrophysics Data System (ADS)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.

    2017-04-01

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κ_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κ_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κ_WL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ^2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
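    The "lognormal PDF convolved with Poisson noise" model for galaxy counts-in-cells can be sketched as follows (σ and the mean count per cell are hypothetical illustration values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 200_000
sigma = 0.6          # width of the underlying Gaussian (hypothetical)
nbar = 25.0          # mean galaxy count per cell (hypothetical)

# Lognormal density contrast: delta = exp(g - sigma^2/2) - 1 with
# g ~ N(0, sigma^2), so <delta> = 0 and 1 + delta > 0 everywhere.
g = rng.normal(0.0, sigma, n_cells)
delta = np.exp(g - 0.5 * sigma**2) - 1.0

# Counts in cells: Poisson shot noise on top of the lognormal field.
counts = rng.poisson(nbar * (1.0 + delta))

# The count variance decomposes into shot noise plus field variance:
#   Var[N] = nbar + nbar^2 * Var[delta],  Var[delta] = exp(sigma^2) - 1
var_model = nbar + nbar**2 * (np.exp(sigma**2) - 1.0)
```

    Fitting this two-parameter model to the observed CiC histogram is, in essence, the test the abstract describes for the galaxy density contrast.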

  10. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
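    A minimal version of characteristic-function-based parameter estimation (for illustration we use the standard result that, with exponential pulses and exponentially distributed amplitudes, the stationary amplitude PDF of this process is a gamma distribution; the grid search is a stand-in for a proper optimizer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stationary signal values: gamma-distributed, with shape
# playing the role of the intermittency parameter.
gamma_true, scale_true = 2.5, 1.0
x = rng.gamma(gamma_true, scale_true, 50_000)

# Empirical characteristic function on a frequency grid.
u = np.linspace(0.05, 2.0, 40)
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)

# Model characteristic function of a gamma distribution.
def model_cf(u, shape, scale):
    return (1.0 - 1j * u * scale) ** (-shape)

# Crude least-squares fit by grid search over (shape, scale).
shapes = np.linspace(1.0, 4.0, 301)
scales = np.linspace(0.5, 1.5, 101)
err = [[np.abs(ecf - model_cf(u, k, s)).sum() for s in scales]
       for k in shapes]
i, j = np.unravel_index(np.argmin(err), (len(shapes), len(scales)))
shape_hat, scale_hat = shapes[i], scales[j]
```

    The same matching works when no closed-form PDF exists, which is exactly the situation the abstract motivates.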

  11. Spectroscopic measurements of water vapor plasmas at high resolution: The optical transition probabilities for OH (A 2 Sigma - X 2 Pi)

    NASA Technical Reports Server (NTRS)

    Klein, L.

    1972-01-01

    Emission and absorption spectra of water vapor plasmas generated in a wall-stabilized arc at atmospheric pressure and 4 current, and at 0.03 atm and 15 to 50 A, were measured at high spatial and spectral resolution. The gas temperature was determined from the shape of Doppler-broadened rotational lines of OH. The observed nonequilibrium population distributions over the energy levels of atoms are interpreted in terms of a theoretical state model for diffusion-controlled arc plasmas. Excellent correlation is achieved between measured and predicted occupation of hydrogen energy levels. It is shown that the population distribution over the nonpredissociating rotational-vibrational levels of the A 2 Sigma state of OH is close to an equilibrium distribution at the gas temperature, although the total density of this state is much higher than its equilibrium density. The reduced intensities of the rotational lines originating in these levels yielded Boltzmann plots that were strictly linear.

  12. High-energy Electron Scattering and the Charge Distributions of Selected Nuclei

    DOE R&D Accomplishments Database

    Hahn, B.; Ravenhall, D. G.; Hofstadter, R.

    1955-10-01

    Experimental results are presented of electron scattering by Ca, V, Co, In, Sb, Hf, Ta, W, Au, Bi, Th, and U, at 183 Mev and (for some of the elements) at 153 Mev. For those nuclei for which asphericity and inelastic scattering are absent or unimportant, i.e., Ca, V, Co, In, Sb, Au, and Bi, a partial wave analysis of the Dirac equation has been performed in which the nuclei are represented by static, spherically symmetric charge distributions. Smoothed uniform charge distributions have been assumed; these are characterized by a constant charge density in the central region of the nucleus, with a smoothed-out surface. Essentially two parameters can be determined, related to the radius and to the surface thickness. An examination of the Au experiments shows that the functional forms of the surface are not important, and that the charge density in the central regions is probably fairly flat, although it cannot be determined very accurately.

  13. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
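    One concrete instance of such a construction can be sketched via the classical beta-to-Student-t map, which expresses a q-Gaussian (for 1 < q < 3, where it coincides with a Student-t with ν = (3 - q)/(q - 1) degrees of freedom) as a function of two independent gamma variables with the same scale; the shape index a below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Two independent gamma variables with the same (unit) scale; their
# common shape index a fixes nu = 2a and hence q = (nu + 3)/(nu + 1).
a = 3.0                      # hypothetical shape index
g1 = rng.gamma(a, 1.0, n)
g2 = rng.gamma(a, 1.0, n)

# The ratio is a symmetric Beta(a, a) variable ...
b = g1 / (g1 + g2)

# ... and the beta-to-Student-t map turns it into a t variate with
# nu = 2a degrees of freedom, i.e. a q-Gaussian shape.
nu = 2.0 * a
t = np.sqrt(nu) * (b - 0.5) / np.sqrt(b * (1.0 - b))
q = (nu + 3.0) / (nu + 1.0)  # = 9/7 here
```

    Unequal shape indices for the two gammas break the symmetry, which is the route to the asymmetric extensions the abstract mentions.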

  14. Image-based 3D modeling study of the influence of vessel density and blood hemoglobin concentration on tumor oxygenation and response to irradiation.

    PubMed

    Lagerlöf, Jakob H; Kindblom, Jon; Cortez, Eliane; Pietras, Kristian; Bernhardt, Peter

    2013-02-01

    Hypoxia is one of the most important factors influencing clinical outcome after radiotherapy. Improved knowledge of factors affecting the levels and distribution of oxygen within a tumor is needed. The authors constructed a theoretical 3D model based on histological images to analyze the influence of vessel density and hemoglobin (Hb) concentration on the response to irradiation. The pancreases of a Rip-Tag2 mouse, a model of malignant insulinoma, were excised, cryosectioned, immunostained, and photographed. Vessels were identified by image thresholding and a 3D vessel matrix assembled. The matrix was reduced to functional vessel segments and enlarged by replication. The steady-state oxygen tension field of the tumor was calculated by iteratively employing Green's function method for diffusion and the Michaelis-Menten model for consumption. The impact of vessel density on the radiation response was studied by removing a number of randomly selected vessels. The impact of Hb concentration was studied by independently changing vessel oxygen partial pressure (pO(2)). For each oxygen distribution, the oxygen enhancement ratio (OER) was calculated and the mean absorbed dose at which the tumor control probability (TCP) was 0.99 (D(99)) was determined using the linear-quadratic cell survival model (LQ model). Decreased pO(2) shifted the oxygen distribution to lower values, whereas decreased vessel density caused the distribution to widen and shift to lower values. Combined scenarios caused lower-shifted distributions, emphasising log-normal characteristics. Vessel reduction combined with increased blood pO(2) caused the distribution to widen due to a lack of vessels. The most pronounced radiation effect of increased pO(2) occurred with tumor tissue with 50% of the maximum vessel density used in the simulations. A 51% decrease in D(99), from 123 to 60 Gy, was found between the lowest and highest pO(2) concentrations. 
Our results indicate that an intermediate vascular density region exists where enhanced blood oxygen concentration may be beneficial for radiation response. The results also suggest that it is possible to distinguish between diffusion-limited and anemic hypoxia from the characteristics of the pO(2) distribution.
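    The dose-response machinery used here (LQ survival, Poisson TCP, and the dose at which TCP = 0.99) can be sketched with hypothetical parameters; folding the OER in as a simple dose-modifying factor is a deliberate simplification of the paper's voxel-wise oxygen field:

```python
import numpy as np

# Hypothetical LQ parameters and clonogen number (illustration only).
alpha, beta = 0.3, 0.03        # Gy^-1, Gy^-2 (oxic values)
n_clonogens = 1e7

def tcp(dose, oer=1.0):
    """Poisson TCP with LQ survival; hypoxia is modelled crudely by
    dividing the dose by the oxygen enhancement ratio (OER)."""
    d_eff = dose / oer
    sf = np.exp(-alpha * d_eff - beta * d_eff**2)
    return np.exp(-n_clonogens * sf)

def d99(oer=1.0):
    """Dose at which TCP = 0.99, found by bisection."""
    lo, hi = 0.0, 500.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if tcp(mid, oer) < 0.99:
            lo = mid
        else:
            hi = mid
    return hi

# Well-oxygenated versus hypoxic (OER = 2.5) tissue:
d_oxic, d_hypoxic = d99(1.0), d99(2.5)
```

    With this single-compartment simplification the required dose scales linearly with the OER; the paper's D(99) values instead integrate the TCP over the full simulated pO2 distribution.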

  15. Parasitism alters three power laws of scaling in a metazoan community: Taylor’s law, density-mass allometry, and variance-mass allometry

    PubMed Central

    Lagrue, Clément; Poulin, Robert; Cohen, Joel E.

    2015-01-01

    How do the lifestyles (free-living unparasitized, free-living parasitized, and parasitic) of animal species affect major ecological power-law relationships? We investigated this question in metazoan communities in lakes of Otago, New Zealand. In 13,752 samples comprising 1,037,058 organisms, we found that species of different lifestyles differed in taxonomic distribution and body mass and were well described by three power laws: a spatial Taylor’s law (the spatial variance in population density was a power-law function of the spatial mean population density); density-mass allometry (the spatial mean population density was a power-law function of mean body mass); and variance-mass allometry (the spatial variance in population density was a power-law function of mean body mass). To our knowledge, this constitutes the first empirical confirmation of variance-mass allometry for any animal community. We found that the parameter values of all three relationships differed for species with different lifestyles in the same communities. Taylor's law and density-mass allometry accurately predicted the form and parameter values of variance-mass allometry. We conclude that species of different lifestyles in these metazoan communities obeyed the same major ecological power-law relationships but did so with parameters specific to each lifestyle, probably reflecting differences among lifestyles in population dynamics and spatial distribution. PMID:25550506

  16. Parasitism alters three power laws of scaling in a metazoan community: Taylor's law, density-mass allometry, and variance-mass allometry.

    PubMed

    Lagrue, Clément; Poulin, Robert; Cohen, Joel E

    2015-02-10

    How do the lifestyles (free-living unparasitized, free-living parasitized, and parasitic) of animal species affect major ecological power-law relationships? We investigated this question in metazoan communities in lakes of Otago, New Zealand. In 13,752 samples comprising 1,037,058 organisms, we found that species of different lifestyles differed in taxonomic distribution and body mass and were well described by three power laws: a spatial Taylor's law (the spatial variance in population density was a power-law function of the spatial mean population density); density-mass allometry (the spatial mean population density was a power-law function of mean body mass); and variance-mass allometry (the spatial variance in population density was a power-law function of mean body mass). To our knowledge, this constitutes the first empirical confirmation of variance-mass allometry for any animal community. We found that the parameter values of all three relationships differed for species with different lifestyles in the same communities. Taylor's law and density-mass allometry accurately predicted the form and parameter values of variance-mass allometry. We conclude that species of different lifestyles in these metazoan communities obeyed the same major ecological power-law relationships but did so with parameters specific to each lifestyle, probably reflecting differences among lifestyles in population dynamics and spatial distribution.
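    Fitting any of these three power laws amounts to ordinary least squares in log-log space; a sketch on synthetic data (a_true and b_true are arbitrary illustrative values, not estimates from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic populations obeying Taylor's law: variance = a * mean^b,
# with multiplicative lognormal scatter.
a_true, b_true = 2.0, 1.8
mean_density = np.logspace(-1, 2, 50)
var_density = a_true * mean_density**b_true * rng.lognormal(0, 0.1, 50)

# Power-law parameters from a straight-line fit in log-log space.
b_hat, log_a_hat = np.polyfit(np.log(mean_density),
                              np.log(var_density), 1)
a_hat = np.exp(log_a_hat)
```

    Comparing the fitted (a, b) pairs across lifestyle groups is the core of the analysis described above.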

  17. Quantifying the interplay between gravity and magnetic field in molecular clouds - a possible multiscale energy equipartition in NGC 6334

    NASA Astrophysics Data System (ADS)

    Li, Guang-Xing; Burkert, Andreas

    2018-02-01

    The interplay between gravity, turbulence and the magnetic field determines the evolution of the molecular interstellar medium (ISM) and the formation of stars. In spite of growing interest, there remains a lack of understanding of the importance of the magnetic field over multiple scales. We derive the magnetic energy spectrum - a measure that constrains the multiscale distribution of the magnetic energy - and compare it with the gravitational energy spectrum derived in Li & Burkert. In our formalism, the gravitational energy spectrum is purely determined by the surface density probability density distribution (PDF), and the magnetic energy spectrum is determined by both the surface density PDF and the magnetic field-density relation. If regions have density PDFs close to P(Σ) ∼ Σ^-2 and a universal magnetic field-density relation B ∼ ρ^1/2, we expect a multiscale near equipartition between gravity and the magnetic fields. This equipartition is found to be true in NGC 6334, where estimates of magnetic fields over multiple scales (from 0.1 pc to a few parsec) are available. However, the current observations are still limited in sample size. In the future, it is necessary to obtain multiscale measurements of magnetic fields from different clouds with different surface density PDFs and apply our formalism to further study the gravity-magnetic field interplay.

  18. Predicting above-ground density and distribution of small mammal prey species at large spatial scales

    PubMed Central

    2017-01-01

    Grassland and shrub-steppe ecosystems are increasingly threatened by anthropogenic activities. Loss of native habitats may negatively impact important small mammal prey species. Little information, however, is available on the impact of habitat variability on density of small mammal prey species at broad spatial scales. We examined the relationship between small mammal density and remotely-sensed environmental covariates in shrub-steppe and grassland ecosystems in Wyoming, USA. We sampled four sciurid and leporid species groups using line transect methods, and used hierarchical distance-sampling to model density in response to variation in vegetation, climate, topographic, and anthropogenic variables, while accounting for variation in detection probability. We created spatial predictions of each species’ density and distribution. Sciurid and leporid species exhibited mixed responses to vegetation, such that changes to native habitat will likely affect prey species differently. Density of white-tailed prairie dogs (Cynomys leucurus), Wyoming ground squirrels (Urocitellus elegans), and leporids correlated negatively with proportion of shrub or sagebrush cover and positively with herbaceous cover or bare ground, whereas least chipmunks showed a positive correlation with shrub cover and a negative correlation with herbaceous cover. Spatial predictions from our models provide a landscape-scale metric of above-ground prey density, which will facilitate the development of conservation plans for these taxa and their predators at spatial scales relevant to management. PMID:28520757

  19. Disordered cellular automaton traffic flow model: phase separated state, density waves and self organized criticality

    NASA Astrophysics Data System (ADS)

    Fourrate, K.; Loulidi, M.

    2006-01-01

    We suggest a disordered traffic flow model that captures many features of traffic flow. It is an extension of the Nagel-Schreckenberg (NaSch) stochastic cellular automaton for single-lane vehicular traffic. It incorporates random acceleration and deceleration terms that may be greater than one unit. Under its intrinsic dynamics, for high values of the braking probability pr, our model leads to a constant flow at intermediate densities without introducing any spatial inhomogeneities. For a system of fast drivers, pr→0, the model exhibits the density wave behavior observed in car-following models with optimal velocity. For high values of pr and random deceleration, the gap distribution of the model we present exhibits a power law at a critical density, which is a hallmark of self-organized criticality.
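    For reference, the baseline NaSch update rules that this disordered model extends (here with the standard unit acceleration and unit random braking, not the paper's random multi-unit moves):

```python
import numpy as np

rng = np.random.default_rng(5)

def nasch_step(pos, vel, road_len, v_max=5, p_brake=0.3):
    """One parallel update of the Nagel-Schreckenberg model on a ring
    road. (The disordered model in the text additionally draws random
    acceleration/deceleration amounts larger than one unit.)"""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gap = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells ahead
    vel = np.minimum(vel + 1, v_max)                # 1. accelerate
    vel = np.minimum(vel, gap)                      # 2. avoid collision
    brake = rng.random(len(vel)) < p_brake          # 3. random braking
    vel = np.maximum(vel - brake, 0)
    pos = (pos + vel) % road_len                    # 4. move
    return pos, vel

road_len, n_cars = 100, 30
pos = np.sort(rng.choice(road_len, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(200):
    pos, vel = nasch_step(pos, vel, road_len)
flow = vel.mean() * n_cars / road_len   # cars per cell per time step
```

    Replacing the unit steps in rules 1 and 3 with random step sizes greater than one yields the disordered dynamics the abstract studies.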

  20. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
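    The logistic probability model and its maximum-likelihood fit can be sketched on synthetic data (the single covariate and the coefficient values are hypothetical; the study's model uses stand- and site-level predictors):

```python
import numpy as np

rng = np.random.default_rng(11)

def p_logistic(X, beta):
    """Logistic probability of obtaining at least the specified
    regeneration density, given covariates X (hypothetical form)."""
    return 1.0 / (1.0 + np.exp(-(X @ beta)))

# Synthetic plots: an intercept plus one covariate.
n = 5000
X = np.column_stack([np.ones(n), rng.normal(0, 1, n)])
beta_true = np.array([-0.5, 1.2])
y = rng.random(n) < p_logistic(X, beta_true)   # 1 = density attained

# Maximum likelihood by plain gradient ascent on the log-likelihood.
beta = np.zeros(2)
for _ in range(2000):
    grad = X.T @ (y - p_logistic(X, beta)) / n
    beta += 0.5 * grad
```

    Setting the dependent variable to 1 when the observed regeneration density meets the target threshold, as the abstract describes, reduces the fitting problem to exactly this binary MLE.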

  1. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices

    PubMed Central

    Ye, Xin; Pendyala, Ram M.; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152

  2. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices.

    PubMed

    Wang, Ke; Ye, Xin; Pendyala, Ram M; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences.
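    The general idea of a Legendre-based semi-nonparametric density can be sketched as follows (this is a generic construction of the SNP family, not necessarily the paper's exact parameterization; the coefficients θ are hypothetical). The base pdf is modulated by the square of an orthonormal shifted-Legendre series evaluated at u = F0(x), which keeps the result a valid density:

```python
import numpy as np
from scipy.special import eval_legendre

def snp_density(x, theta, base_pdf, base_cdf):
    """Semi-nonparametric density: a base pdf modulated by the square of
    an orthonormal shifted-Legendre series in u = F0(x).  It integrates
    to one because the polynomials are orthonormal on [0, 1]."""
    u = base_cdf(x)
    series = sum(t * np.sqrt(2 * k + 1) * eval_legendre(k, 2 * u - 1)
                 for k, t in enumerate(theta))
    return base_pdf(x) * series**2 / np.sum(np.asarray(theta) ** 2)

# Base kernel: the standard Gumbel distribution of the logit model.
gumbel_pdf = lambda x: np.exp(-x - np.exp(-x))
gumbel_cdf = lambda x: np.exp(-np.exp(-x))

theta = [1.0, 0.4, 0.3]            # hypothetical series coefficients
x = np.linspace(-10.0, 20.0, 20001)
f = snp_density(x, theta, gumbel_pdf, gumbel_cdf)

# Check normalization by trapezoidal quadrature: area ~ 1.
area = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
```

    With theta = [1] the base Gumbel is recovered; additional coefficients introduce the multimodality that the standard Gumbel cannot represent.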

  3. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    PubMed

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.

  4. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  5. Preliminary analysis of the span-distributed-load concept for cargo aircraft design

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1975-01-01

    A simplified computer analysis of the span-distributed-load airplane (in which payload is placed within the wing structure) has shown that the span-distributed-load concept has high potential for application to future air cargo transport design. Significant increases in payload fraction over current wide-bodied freighters are shown for gross weights in excess of 0.5 Gg (1,000,000 lb). A cruise-matching calculation shows that the trend toward higher aspect ratio improves overall efficiency; that is, less thrust and fuel are required. The optimal aspect ratio probably is not determined by structural limitations. Terminal-area constraints and increasing design-payload density, however, tend to limit aspect ratio.

  6. Understanding star formation in molecular clouds. II. Signatures of gravitational collapse of IRDCs

    NASA Astrophysics Data System (ADS)

    Schneider, N.; Csengeri, T.; Klessen, R. S.; Tremblin, P.; Ossenkopf, V.; Peretto, N.; Simon, R.; Bontemps, S.; Federrath, C.

    2015-06-01

    We analyse column density and temperature maps derived from Herschel dust continuum observations of a sample of prominent, massive infrared dark clouds (IRDCs), i.e. G11.11-0.12, G18.82-0.28, G28.37+0.07, and G28.53-0.25. We disentangle the velocity structure of the clouds using 13CO 1→0 and 12CO 3→2 data, showing that these IRDCs are the densest regions in massive giant molecular clouds (GMCs) and not isolated features. The probability distribution function (PDF) of column densities for all clouds has a power-law distribution over all (high) column densities, regardless of the evolutionary stage of the cloud: G11.11-0.12, G18.82-0.28, and G28.37+0.07 contain (proto)stars, while G28.53-0.25 shows no signs of star formation. This is in contrast to the purely log-normal PDFs reported for near- and/or mid-IR extinction maps. We find a log-normal distribution for lower column densities only if we construct PDFs from the column density maps of the whole GMC in which the IRDCs are embedded. By comparing the PDF slope and the radial column density profile of three of our clouds, we attribute the power law to the effect of large-scale gravitational collapse and to local free-fall collapse of pre- and protostellar cores for the highest column densities. A significant impact on the cloud properties from radiative feedback is unlikely because the clouds are mostly devoid of star formation. Independently of the PDF analysis, we find infall signatures in the spectral profiles of 12CO for G28.37+0.07 and G11.11-0.12, supporting the scenario of gravitational collapse. Our results are in line with earlier interpretations that see massive IRDCs as the densest regions within GMCs, which may be the progenitors of massive stars or clusters. At least some of the IRDCs are probably the same features as ridges (high column density regions with N > 10^23 cm^-2 over small areas), which were defined for nearby IR-bright GMCs.
    Because IRDCs are confined only to the densest (gravity-dominated) cloud regions, the PDF constructed from this kind of clipped image does not represent the (turbulence-dominated) low column density regime of the cloud. The column density maps (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A29

  7. Predicting structures in the Zone of Avoidance

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Colless, Matthew; Kraan-Korteweg, Renée C.; Gottlöber, Stefan

    2017-11-01

    The Zone of Avoidance (ZOA), whose emptiness is an artefact of our Galaxy's dust, has been challenging observers as well as theorists for many years. Multiple attempts have been made on the observational side to map this region in order to better understand the local flows. On the theoretical side, however, this region is often simply statistically populated with structures, and no real attempt has been made to confront theoretical and observed matter distributions. This paper takes a step forward using constrained realizations (CRs) of the local Universe, shown to be perfect substitutes for local Universe-like simulations in smoothed high-density peak studies. Far from generating completely `random' structures in the ZOA, the reconstruction technique arranges matter according to the surrounding environment of this region. More precisely, the mean distributions of structures in a series of constrained and random realizations (RRs) differ: while densities annihilate each other when averaging over 200 RRs, structures persist when summing 200 CRs. The probability distribution function of ZOA grid cells to be highly overdense is a Gaussian with a 15 per cent mean in the random case, while that of the constrained case exhibits large tails. This implies that areas with the largest probabilities most likely host a structure. Comparisons between these predictions and observations, like those of the Puppis 3 cluster, show a remarkable agreement and allow us to assert the presence of the Vela supercluster, recently highlighted by observations, at about 180 h^-1 Mpc, right behind the thickest dust layers of our Galaxy.

  8. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991 and 2004 above Kiruna, Sweden

    NASA Astrophysics Data System (ADS)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size-resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fit to 250-meter averages of the particle concentration data. The characteristics of these fits, along with temperature and water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible; otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius and distribution width for mid- and late-winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented and should be useful for setting bounds for retrievals of PSC properties from remote measurements and for constraining model representations of PSCs.

  9. Large eddy simulation of turbulent premixed combustion using tabulated detailed chemistry and presumed probability density function

    NASA Astrophysics Data System (ADS)

    Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin

    2016-03-01

    A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemistry states are tabulated by a combination of the auto-ignition and extended auto-ignition models. To evaluate the capability of the proposed tabulation method to represent the thermo-chemistry states for different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to account for the interaction of turbulence and flame, with a beta PDF modelling the distribution of the reaction progress variable. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, are applied to represent the interaction between the two mixture fractions associated with the three inlet streams. Comparisons of statistical results show that both presumed PDF models for the two mixture fractions are capable of predicting temperature and major species profiles; however, they have a significant effect on the predictions of intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.

  10. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
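
    The Box-Cox step described above can be sketched in a few lines: the transform parameter λ is chosen by maximizing the profile log-likelihood under the assumption that the transformed data are Gaussian. A minimal illustration with a grid search on synthetic lognormal data (not the ERIM model itself; the data and grid are assumptions for demonstration):

```python
import math, random

def boxcox(x, lam):
    # Box-Cox power transform for positive data; lam = 0 is the log transform
    if abs(lam) < 1e-12:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def profile_loglik(x, lam):
    # Profile log-likelihood of lambda, assuming the transformed data are Gaussian:
    # -(n/2) ln(sigma_hat^2) + (lambda - 1) * sum(ln x)
    y = boxcox(x, lam)
    n = len(y)
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in x)

random.seed(0)
data = [random.lognormvariate(0.0, 0.5) for _ in range(2000)]  # skewed, positive data
grid = [i / 20.0 for i in range(-40, 41)]  # lambda in [-2, 2]
best = max(grid, key=lambda lam: profile_loglik(data, lam))
print("lambda* =", best)  # near 0: the log transform Gaussianizes lognormal data
```

After the transform, a detector built on multivariate Gaussian assumptions can be applied to the (near-Gaussian) transformed data.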

  11. Impact of climatic change on the northern latitude limit and population density of the disease-transmitting European tick Ixodes ricinus.

    PubMed Central

    Lindgren, E; Tälleklint, L; Polfeldt, T

    2000-01-01

    We examined whether a reported northward expansion of the geographic distribution limit of the disease-transmitting tick Ixodes ricinus and an increased tick density between the early 1980s and mid-1990s in Sweden were related to climatic changes. The annual number of days with minimum temperatures above vital bioclimatic thresholds for the tick's life-cycle dynamics was related to tick density in both the early 1980s and the mid-1990s in 20 districts in central and northern Sweden. The winters were markedly milder in all of the study areas in the 1990s as compared to the 1980s. Our results indicate that the reported northern shift in the distribution limit of ticks is related to fewer days during the winter seasons with low minimum temperatures, i.e., below -12 degrees C. At high latitudes, low winter temperatures had the clearest impact on tick distribution. Further south, a combination of mild winters (fewer days with minimum temperatures below -7 degrees C) and extended spring and autumn seasons (more days with minimum temperatures from 5 to 8 degrees C) was related to increases in tick density. We conclude that the relatively mild climate of the 1990s in Sweden is probably one of the primary reasons for the observed increase of density and geographic range of I. ricinus ticks. PMID:10656851

  12. SAS procedures for designing and analyzing sample surveys

    USGS Publications Warehouse

    Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.

    2003-01-01

    Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
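
    To illustrate why stratification pays off when organisms are not homogeneously distributed, here is a toy comparison (a hypothetical two-stratum population with proportional allocation, not an example from the publication) of the standard errors of the simple-random and stratified estimators of the mean:

```python
import random, statistics

random.seed(1)
# Hypothetical population: two habitat strata with very different mean densities
strata = {"wetland": [random.gauss(50, 5) for _ in range(2000)],
          "upland":  [random.gauss(10, 5) for _ in range(8000)]}
population = strata["wetland"] + strata["upland"]
n = 100  # total sample size

def srs_mean():
    # Simple random sampling: every unit has equal inclusion probability
    return statistics.fmean(random.sample(population, n))

def stratified_mean():
    # Proportional allocation: sample each stratum in proportion to its size,
    # then combine stratum means with their population weights
    total = len(population)
    est = 0.0
    for units in strata.values():
        k = round(n * len(units) / total)
        est += (len(units) / total) * statistics.fmean(random.sample(units, k))
    return est

reps = 500
se_srs = statistics.stdev(srs_mean() for _ in range(reps))
se_str = statistics.stdev(stratified_mean() for _ in range(reps))
print(se_srs, se_str)  # stratification removes the between-stratum variance
```

The stratified estimator's standard error is much smaller because the large between-stratum component of variance is eliminated by design.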

  13. Dielectric response in Bloch’s hydrodynamic model of an electron-ion plasma

    NASA Astrophysics Data System (ADS)

    Ishikawa, K.; Felderhof, B. U.

    The linear response of an electron-ion plasma to an applied oscillating electric field is studied within the framework of Bloch’s classical hydrodynamic model. The ions are assumed to be fixed in space and distributed according to a known probability distribution. The linearized equations of motion for electron density and flow velocity are studied with the aid of a multiple scattering analysis and cluster expansion. This allows systematic reduction of the many-ion problem to a composition of few-ion problems, and shows how the longitudinal dielectric response function can in principle be calculated.

  14. A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-06-13

    1. INTRODUCTION The Lanchester combat model is a simple way to assess the effects of quantity and quality...case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons...since the initial condition is very close to the break even line. What is more interesting is that the probability density tends to concentrate at

  15. Analysis of Muon Induced Neutrons in Detecting High Z Nuclear Materials

    DTIC Science & Technology

    2015-03-01

    mass distributions, delayed fission probabilities, and prompt to delayed fission ratios [16]. 2.3 Muon Catalyzed Fusion: Fusion occurs when two light ... Z is the proton number; A is the atomic mass; ρ is the material density; β = v/c, where v is the velocity of the particle and c is the speed of light ... % Combine all neutron events time stamps into one vector ...

  16. On Algorithms for Generating Computationally Simple Piecewise Linear Classifiers

    DTIC Science & Technology

    1989-05-01

    suffers. - Waveform classification, e.g. speech recognition, seismic analysis (i.e. discrimination between earthquakes and nuclear explosions), target... assuming Gaussian distributions (B-G) d) Bayes classifier with probability densities estimated with the k-N-N method (B-kNN) e) The nearest neighbour... range of classifiers are chosen including a fast, easily computable and often used classifier (B-G), reliable and complex classifiers (B-kNN and NNR

  17. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
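
    The positivity problem described above is easy to demonstrate: a Gram-Charlier A series truncated after the fourth-order term can dip below zero for moderate skewness. A small sketch, with the probabilists' Hermite polynomials He3 and He4 written out explicitly and illustrative moment values:

```python
import math

def phi(x):
    # Standard Gaussian density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def gram_charlier(x, skew, exkurt):
    # Gram-Charlier A series truncated after the fourth-order term:
    # phi(x) * [1 + (skew/6) He3(x) + (excess kurtosis/24) He4(x)]
    he3 = x ** 3 - 3.0 * x
    he4 = x ** 4 - 6.0 * x ** 2 + 3.0
    return phi(x) * (1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4)

# With sizeable skewness the truncated "density" goes negative in one tail
vals = [gram_charlier(-3.0 + i * 0.01, 1.0, 0.0) for i in range(601)]
print(min(vals))  # negative: the truncated series is not a valid density
```

This is exactly the deficiency the paper's expansion is designed to avoid: a truncation that remains manifestly positive.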

  18. Distribution of tsunami interevent times

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2008-01-01

    The distribution of tsunami interevent times is analyzed using global and site-specific (Hilo, Hawaii) tsunami catalogs. An empirical probability density distribution is determined by binning the observed interevent times during a period in which the observation rate is approximately constant. The empirical distributions for both catalogs exhibit non-Poissonian behavior in which there is an abundance of short interevent times compared to an exponential distribution. Two types of statistical distributions are used to model this clustering behavior: (1) long-term clustering described by a universal scaling law, and (2) Omori law decay of aftershocks and triggered sources. The empirical and theoretical distributions all imply an increased hazard rate after a tsunami, followed by a gradual decrease with time approaching a constant hazard rate. Examination of tsunami sources suggests that many of the short interevent times are caused by triggered earthquakes, though the triggered events are not necessarily on the same fault.
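
    The binning step can be sketched with a toy example: synthetic interevent times drawn from a two-component mixture (a stand-in for triggered plus background events, not the actual tsunami catalogs) show an excess of short intervals relative to an exponential distribution with the same mean:

```python
import math, random

random.seed(2)
# Hypothetical interevent times: a mixture of clustered (short) and background waits
times = ([random.expovariate(1 / 5.0) for _ in range(300)] +    # triggered, short
         [random.expovariate(1 / 100.0) for _ in range(700)])   # background

mean = sum(times) / len(times)
width = 10.0
first_bin = sum(1 for t in times if t < width)
empirical = first_bin / (len(times) * width)          # empirical density in first bin
exponential = (1 - math.exp(-width / mean)) / width   # exponential model, same mean

print(empirical, exponential)  # excess of short interevent times -> clustering
```

The empirical first-bin density exceeds the Poisson (exponential) prediction, which is the non-Poissonian signature the abstract describes.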

  19. Probability of lek collapse is lower inside sage-grouse Core Areas: Effectiveness of conservation policy for a landscape species.

    PubMed

    Spence, Emma Suzuki; Beck, Jeffrey L; Gregory, Andrew J

    2017-01-01

    Greater sage-grouse (Centrocercus urophasianus) occupy sagebrush (Artemisia spp.) habitats in 11 western states and 2 Canadian provinces. In September 2015, the U.S. Fish and Wildlife Service announced that the listing status for sage-grouse had changed from "warranted but precluded" to "not warranted". The primary reason cited for this change of status was that the enactment of new regulatory mechanisms was sufficient to protect sage-grouse populations. One such plan is the 2008 Wyoming Sage Grouse Executive Order (SGEO), enacted by Governor Freudenthal. The SGEO identifies "Core Areas" that are to be protected by keeping them relatively free from further energy development and limiting other forms of anthropogenic disturbance near active sage-grouse leks. Using the Wyoming Game and Fish Department's sage-grouse lek count database and the Wyoming Oil and Gas Conservation Commission database of oil and gas well locations, we investigated the effectiveness of Wyoming's Core Areas, specifically: 1) how well Core Areas encompass the distribution of sage-grouse in Wyoming, 2) whether Core Area leks have a reduced probability of lek collapse, and 3) what, if any, edge effects the intensification of oil and gas development adjacent to Core Areas may be having on Core Area populations. Core Areas contained 77% of male sage-grouse attending leks and 64% of active leks. Using Bayesian binomial probability analysis, we found an average 10.9% probability of lek collapse inside Core Areas and an average 20.4% probability of lek collapse outside Core Areas. Using linear regression, we found development density outside Core Areas was related to the probability of lek collapse inside Core Areas. Specifically, probability of collapse among leks >4.83 km inside Core Area boundaries was significantly related to well density within 1.61 km (1 mi) and 4.83 km (3 mi) outside of Core Area boundaries.
Collectively, these data suggest that the Wyoming Core Area Strategy has benefited sage-grouse and sage-grouse habitat conservation; however, additional guidelines limiting development densities adjacent to Core Areas may be necessary to effectively protect Core Area populations.
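
    The Bayesian binomial analysis mentioned above has a simple conjugate form: with a Beta(a, b) prior on the collapse probability and k collapses observed among n leks, the posterior is Beta(a+k, b+n-k). A sketch with hypothetical counts (illustrative only, not the Wyoming data):

```python
import math

def beta_posterior_mean(collapsed, total, a=1.0, b=1.0):
    # Beta(a, b) prior + binomial likelihood -> Beta(a+k, b+n-k) posterior;
    # the posterior mean is (a+k) / (a+b+n)
    return (a + collapsed) / (a + b + total)

def beta_pdf(p, a, b):
    # Posterior density of the collapse probability at p
    logB = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p) - logB)

# Hypothetical counts: 11 of 100 leks collapsed inside, 20 of 98 outside Core Areas
inside = beta_posterior_mean(11, 100)
outside = beta_posterior_mean(20, 98)
print(inside, outside)  # posterior mean collapse probabilities, inside vs outside
```

With a uniform Beta(1, 1) prior this reduces to the familiar Laplace-smoothed proportion (k+1)/(n+2).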

  20. On the logistic equation subject to uncertainties in the environmental carrying capacity and initial population density

    NASA Astrophysics Data System (ADS)

    Dorini, F. A.; Cecconello, M. S.; Dorini, L. B.

    2016-04-01

    It is recognized that handling uncertainty is essential to obtain more reliable results in modeling and computer simulation. This paper discusses the logistic equation subject to uncertainties in two parameters: the environmental carrying capacity, K, and the initial population density, N0. We first provide closed-form results for the first probability density function of the population density, N(t), and of its inflection point, t*. We then use the Maximum Entropy Principle to determine the density functions of both K and N0, treating these parameters as independent random variables and considering fluctuations of their values as commonly occur in practice. Finally, closed-form results for the density functions and statistical moments of N(t), for fixed t > 0, and of t* are provided for the uniform distribution case. We carried out numerical experiments to validate the theoretical results, comparing them against those obtained using Monte Carlo simulation.
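
    A minimal Monte Carlo check of this setup uses the closed-form logistic solution with random K and N0; the uniform ranges and growth rate below are illustrative placeholders, not the paper's values:

```python
import math, random

def logistic(t, K, N0, r):
    # Closed-form solution of dN/dt = r N (1 - N/K)
    e = math.exp(r * t)
    return K * N0 * e / (K + N0 * (e - 1.0))

random.seed(3)
r, t = 0.5, 4.0
# Uncertain carrying capacity K and initial density N0, both uniform
samples = [logistic(t, random.uniform(70, 130), random.uniform(5, 15), r)
           for _ in range(20000)]
mean = sum(samples) / len(samples)
# N(t) is increasing in both K and N0, so the extreme parameter corners bound it
lo, hi = logistic(t, 70, 5, r), logistic(t, 130, 15, r)
print(mean, lo, hi)  # the population density N(t) inherits the input uncertainty
```

A histogram of `samples` approximates the first probability density function of N(t) that the paper derives in closed form.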

  1. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction, we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
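
    The pipeline described here (joint density, then conditional, then marginalization, then integration) can be sketched with a simple histogram-based estimator on synthetic voxels. The distance-to-dose model below is purely illustrative, not the paper's data or estimator:

```python
import math, random
from collections import defaultdict

random.seed(4)
DIST_STEP, DOSE_STEP = 0.5, 5.0

def synth_voxels(n):
    # Hypothetical stand-in for prior plans: dose falls off with signed
    # distance to the target boundary, plus noise
    out = []
    for _ in range(n):
        d = random.uniform(0.0, 5.0)
        dose = max(0.0, random.gauss(60.0 * math.exp(-0.5 * d), 3.0))
        out.append((d, dose))
    return out

# Training: joint histogram of (distance, dose) -> conditional P(dose | distance)
joint = defaultdict(lambda: defaultdict(int))
for d, dose in synth_voxels(20000):
    joint[int(d / DIST_STEP)][int(dose / DOSE_STEP)] += 1
conditional = {b: {k: c / sum(h.values()) for k, c in h.items()}
               for b, h in joint.items()}

# Prediction: marginalize the conditional over the new patient's distances
new_distances = [random.uniform(0.0, 5.0) for _ in range(5000)]
dose_pdf = defaultdict(float)
for d in new_distances:
    for k, p in conditional.get(int(d / DIST_STEP), {}).items():
        dose_pdf[k] += p / len(new_distances)

# Integrate the dose density into a cumulative dose-volume histogram
dvh = [sum(p for k, p in dose_pdf.items() if k >= j) for j in range(15)]
print(dvh[0], dvh[5])  # fraction of volume receiving at least j*DOSE_STEP Gy
```

The resulting `dvh` is monotonically non-increasing and starts at 1, as a cumulative dose-volume histogram must.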

  2. Using Geothermal Play Types as an Analogue for Estimating Potential Resource Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, Rachel; Young, Katherine

    Blind geothermal systems are becoming increasingly common as more geothermal fields are developed. Geothermal development is known to have high risk in the early stages of a project development because reservoir characteristics are relatively unknown until wells are drilled. Play types (or occurrence models) categorize potential geothermal fields into groups based on geologic characteristics. To aid in lowering exploration risk, these groups' reservoir characteristics can be used as analogues in new site exploration. The play type schemes used in this paper were Moeck and Beardsmore play types (Moeck et al. 2014) and Brophy occurrence models (Brophy et al. 2011). Operating geothermal fields throughout the world were classified based on their associated play type, and then reservoir characteristics data were catalogued. The distributions of these characteristics were plotted in histograms to develop probability density functions for each individual characteristic. The probability density functions can be used as input analogues in Monte Carlo estimations of resource potential for similar play types in early exploration phases. A spreadsheet model was created to estimate resource potential in undeveloped fields. The user can choose to input their own values for each reservoir characteristic or choose to use the probability distribution functions provided from the selected play type. This paper also addresses the United States Geological Survey's 1978 and 2008 assessment of geothermal resources by comparing their estimated values to reported values from post-site development. Information from the collected data was used in the comparison for thirty developed sites in the United States. No significant trends or suggestions for methodologies could be made by the comparison.
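
    The Monte Carlo use of such analogue distributions can be sketched with a simple volumetric ("heat in place") calculation. All input distributions and constants below are illustrative placeholders, not the catalogued play-type values:

```python
import random, statistics

random.seed(5)

def sample_resource():
    # Hypothetical play-type analogue distributions for one Monte Carlo draw
    area_km2 = random.triangular(2.0, 20.0, 8.0)   # reservoir area
    thick_km = random.triangular(0.5, 2.5, 1.5)    # reservoir thickness
    temp_c   = random.gauss(200.0, 25.0)           # reservoir temperature
    recov    = random.uniform(0.05, 0.15)          # recovery factor
    heat_cap = 2.7  # MJ per m^3 per degC, illustrative volumetric heat capacity
    volume_m3 = area_km2 * thick_km * 1e9
    # Recoverable heat above a 25 degC reference temperature, in joules
    return volume_m3 * heat_cap * 1e6 * max(temp_c - 25.0, 0.0) * recov

draws = [sample_resource() for _ in range(10000)]
deciles = statistics.quantiles(draws, n=10)
p10, p50, p90 = deciles[0], statistics.median(draws), deciles[-1]
print(p10, p50, p90)  # probabilistic resource size from analogue input densities
```

Reporting P10/P50/P90 values from the draws is the usual way such a spreadsheet model summarizes resource potential.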

  3. Determination of a Limited Scope Network's Lightning Detection Efficiency

    NASA Technical Reports Server (NTRS)

    Rompala, John T.; Blakeslee, R.

    2008-01-01

    This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground-based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD, together with information regarding site signal detection thresholds, the type of solution algorithm used, and range attenuation, to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, the modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps; this application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented, and a new method for producing an analytical representation of the empirical PCD is introduced.

  4. Intermittent electron density and temperature fluctuations and associated fluxes in the Alcator C-Mod scrape-off layer

    NASA Astrophysics Data System (ADS)

    Kube, R.; Garcia, O. E.; Theodorsen, A.; Brunner, D.; Kuang, A. Q.; LaBombard, B.; Terry, J. L.

    2018-06-01

    The Alcator C-Mod mirror Langmuir probe system has been used to sample data time series of fluctuating plasma parameters in the outboard mid-plane far scrape-off layer. We present a statistical analysis of one-second-long time series of electron density, temperature, radial electric drift velocity and the corresponding particle and electron heat fluxes. These are sampled during stationary plasma conditions in an ohmically heated, lower single null diverted discharge. The electron density and temperature are strongly correlated and feature fluctuation statistics similar to the ion saturation current. Both electron density and temperature time series are dominated by intermittent, large-amplitude bursts with an exponential distribution of both burst amplitudes and waiting times between them. The characteristic time scale of the large-amplitude bursts is approximately 15 μs. Large-amplitude velocity fluctuations feature a slightly faster characteristic time scale and appear at a faster rate than electron density and temperature fluctuations. Describing these time series as a superposition of uncorrelated exponential pulses, we find that probability distribution functions, power spectral densities as well as auto-correlation functions of the data time series agree well with predictions from the stochastic model. The electron particle and heat fluxes present large-amplitude fluctuations. For this low-density plasma, the radial electron heat flux is dominated by convection, that is, by correlations of fluctuations in the electron density and radial velocity. Hot and dense blobs contribute only a minute fraction of the total fluctuation-driven heat flux.

  5. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification, which permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data base layers.
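
    The first method, folding prior probabilities into maximum likelihood classification, amounts to adding log P(class) to each class's log-likelihood before taking the argmax. A one-dimensional sketch with hypothetical cover-type distributions (all numbers are assumptions for illustration) shows how a collateral-data prior can flip an ambiguous pixel:

```python
import math

def log_gauss(x, mu, sigma):
    # Log density of a univariate Gaussian class-conditional model
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def classify(x, classes, priors):
    # Maximum likelihood classification with priors: pick the class
    # maximizing log p(x | c) + log P(c)
    return max(classes, key=lambda c: log_gauss(x, *classes[c]) + math.log(priors[c]))

# Hypothetical band-density distributions (mean, std) for two cover types
classes = {"forest": (80.0, 10.0), "brush": (100.0, 10.0)}
x = 91.0  # ambiguous pixel, slightly closer to the brush mean

flat = classify(x, classes, {"forest": 0.5, "brush": 0.5})
mapped = classify(x, classes, {"forest": 0.9, "brush": 0.1})  # collateral map prior
print(flat, mapped)
```

With equal priors the pixel goes to brush; a strong collateral prior for forest (e.g. from a terrain or soils layer) overrides the small likelihood difference.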

  6. The use of copulas for practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  7. A method to compute SEU fault probabilities in memory arrays with error correction

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    With the increasing packing densities in VLSI technology, single event upsets (SEUs) due to cosmic radiation are becoming a more critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction, and that every time a memory location is read, single errors are corrected. Memory locations are read at random according to a distribution that is assumed known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
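
    Between scrubbing reads, the mishap event has a birthday-problem structure: u upsets scattered uniformly over M words collide with probability 1 − Π(M−i)/M. A sketch with illustrative parameters (uniform upset placement is an assumption, and this is not the paper's full time-dependent method), cross-checked by Monte Carlo:

```python
import random

def p_mishap_birthday(upsets, M):
    # Probability that at least two of `upsets` SEUs land in the same one of
    # M words (birthday-problem form, uniform placement, no scrub in between)
    p_ok = 1.0
    for i in range(upsets):
        p_ok *= (M - i) / M
    return 1.0 - p_ok

def p_mishap_mc(upsets, M, reps=20000, seed=6):
    # Monte Carlo check: scatter upsets at random and count collisions
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        words = [rng.randrange(M) for _ in range(upsets)]
        if len(set(words)) < upsets:
            hits += 1
    return hits / reps

M = 4096  # hypothetical memory size in words, each word Hamming-protected
print(p_mishap_birthday(10, M), p_mishap_mc(10, M))
```

A single upset can never cause a mishap (it is corrected on the next read), which the closed form reproduces: `p_mishap_birthday(1, M) == 0`.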

  8. Modelling the spatial distribution of Fasciola hepatica in dairy cattle in Europe.

    PubMed

    Ducheyne, Els; Charlier, Johannes; Vercruysse, Jozef; Rinaldi, Laura; Biggeri, Annibale; Demeler, Janina; Brandt, Christina; De Waal, Theo; Selemetas, Nikolaos; Höglund, Johan; Kaba, Jaroslaw; Kowalczyk, Slawomir J; Hendrickx, Guy

    2015-03-26

    A harmonized sampling approach in combination with spatial modelling is required to update current knowledge of fasciolosis in dairy cattle in Europe. Within the scope of the EU project GLOWORM, samples from 3,359 randomly selected farms in 849 municipalities in Belgium, Germany, Ireland, Poland and Sweden were collected and their infection status assessed using an indirect bulk tank milk (BTM) enzyme-linked immunosorbent assay (ELISA). Dairy farms were considered exposed when the optical density ratio (ODR) exceeded the 0.3 cut-off. Two ensemble-modelling techniques, Random Forests (RF) and Boosted Regression Trees (BRT), were used to obtain the spatial distribution of the probability of exposure to Fasciola hepatica using remotely sensed environmental variables (1-km spatial resolution) and interpolated values from meteorological stations as predictors. The median ODRs amounted to 0.31, 0.12, 0.54, 0.25 and 0.44 for Belgium, Germany, Ireland, Poland and southern Sweden, respectively. Using the 0.3 threshold, 571 municipalities were categorized as positive and 429 as negative. RF was capable of predicting the spatial distribution of exposure with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.83 (0.96 for BRT). Both models identified rainfall and temperature as the most important factors for probability of exposure. Areas of high and low exposure were identified by both models, with BRT better at discriminating between low-probability and high-probability exposure; this model may therefore be more useful in practice. Given a harmonized sampling strategy, it should be possible to generate robust spatial models for fasciolosis in dairy cattle in Europe to be used as input for temporal models and for the detection of deviations in baseline probability. Further research is required for model output in areas outside the eco-climatic range investigated.

  9. Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials

    PubMed Central

    Carleton, James B.; D'Amore, Antonio; Feaver, Kristen R.; Rodin, Gregory J.; Sacks, Michael S.

    2014-01-01

    Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. The present work addresses these issues in two ways. First, using methods of geometric probability we develop theoretical estimates for the mean linear and areal fiber intersection densities for two-dimensional fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of two-dimensional fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of SEM images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. The methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data. PMID:25311685

  10. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
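
    For the simplest nonstationary parametric model, a Gaussian whose mean drifts linearly in time, the maximum likelihood fit reduces to least squares, and the fitted trend can be extrapolated to a forecast density. A hedged sketch on synthetic data (the drifting-mean model and all numbers are illustrative, not the paper's sea-ice application):

```python
import math, random

random.seed(7)
# Hypothetical nonstationary record: Gaussian observations with a drifting mean
ts = [i / 10.0 for i in range(100)]
xs = [random.gauss(5.0 - 0.3 * t, 0.4) for t in ts]

# MLE for mu(t) = a + b*t with Gaussian noise = ordinary least squares
n = len(ts)
tbar, xbar = sum(ts) / n, sum(xs) / n
b = (sum((t - tbar) * (x - xbar) for t, x in zip(ts, xs))
     / sum((t - tbar) ** 2 for t in ts))
a = xbar - b * tbar
sigma = math.sqrt(sum((x - (a + b * t)) ** 2 for t, x in zip(ts, xs)) / n)

def forecast_density(x, t):
    # Extrapolated probability density of the observable at a future time t
    mu = a + b * t
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

print(a, b, forecast_density(0.5, 15.0))  # density forecast beyond the data window
```

In the paper's fuller treatment the parameter uncertainty of such a fit is itself propagated into the forecast density; the point estimate above omits that step for brevity.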

  11. A new probability distribution model of turbulent irradiance based on Born perturbation theory

    NASA Astrophysics Data System (ADS)

    Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo

    2010-10-01

    The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical description in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distribution) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which denies the viewpoint that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. So, it is considered that the new model can exactly reflect the Born perturbation theory. Simulated results prove the accuracy of this new model.

  12. Distribution of distances between DNA barcode labels in nanochannels close to the persistence length

    NASA Astrophysics Data System (ADS)

    Reinhart, Wesley F.; Reifenberger, Jeff G.; Gupta, Damini; Muralidhar, Abhiram; Sheats, Julian; Cao, Han; Dorfman, Kevin D.

    2015-02-01

    We obtained experimental extension data for barcoded E. coli genomic DNA molecules confined in nanochannels from 40 nm to 51 nm in width. The resulting data set consists of 1 627 779 measurements of the distance between fluorescent probes on 25 407 individual molecules. The probability density for the extension between labels is negatively skewed, and the magnitude of the skewness is relatively insensitive to the distance between labels. The two Odijk theories for DNA confinement bracket the mean extension and its variance, consistent with the scaling arguments underlying the theories. We also find that a harmonic approximation to the free energy, obtained directly from the probability density for the distance between barcode labels, leads to substantial quantitative error in the variance of the extension data. These results suggest that a theory for DNA confinement in such channels must account for the anharmonic nature of the free energy as a function of chain extension.
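The negative skewness reported for the inter-label extension distribution is the standardized third central moment, which can be estimated directly from a sample. The reflected-gamma data below are a hypothetical stand-in for real extension measurements, chosen only because they are negatively skewed:

```python
import numpy as np

def sample_skewness(x):
    """Standardized third central moment, g1 = m3 / m2**1.5."""
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

rng = np.random.default_rng(6)
# Hypothetical stand-in for inter-label extension data: reflecting a
# gamma distribution produces the negative skew seen in the measurements
extension = 10.0 - rng.gamma(shape=4.0, scale=0.5, size=50_000)
g1 = sample_skewness(extension)  # near -2/sqrt(4) = -1.0 for this toy model
```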

  13. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
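A minimal numpy-only sketch of the core idea, maximizing the Box-Cox profile log-likelihood over a grid of exponents. This is a one-dimensional toy under assumed data; the paper treats multivariate posteriors and generalizations of the Box-Cox family:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform, with the log limit at lam = 0."""
    if abs(lam) < 1e-8:
        return np.log(x)
    return (x**lam - 1.0) / lam

def best_boxcox_exponent(x, lams=np.linspace(-2.0, 2.0, 401)):
    """Grid search of the Box-Cox profile log-likelihood (Gaussian target)."""
    n = len(x)
    log_jacobian = np.log(x).sum()

    def llf(lam):
        y = boxcox(x, lam)
        # Gaussian log-likelihood of the transformed data plus Jacobian term
        return -0.5 * n * np.log(y.var()) + (lam - 1.0) * log_jacobian

    return max(lams, key=llf)

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=5_000)  # skewed, unimodal sample
lam_hat = best_boxcox_exponent(x)  # near 0: the log transform Gaussianizes lognormal data
```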

  14. Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise.

    PubMed

    Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang

    2017-02-14

    In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. To eliminate the adverse effects of geomagnetic activity and powerline interference, the measured field data were first preprocessed to suppress baseline wandering and harmonics using symmetric wavelet transform and least-squares methods. Statistical analysis was then performed on the atmospheric noise at different time and frequency scales. Finally, the wideband ELF and VLF atmospheric noise were analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate for the ELF atmospheric noise after preprocessing with a hole-puncher operator, while for VLF atmospheric noise the symmetric α-stable (SαS) distribution more accurately fits the heavy tail of the envelope probability density function (pdf).
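The amplitude probability distribution at the heart of such noise studies is just the exceedance probability of the envelope, P(|x| > A). The sketch below contrasts Gaussian noise with a heavy-tailed surrogate; a Student-t sample stands in for a true SαS draw to keep the example numpy-only:

```python
import numpy as np

def apd(x, thresholds):
    """Empirical amplitude probability distribution: P(|x| > A) per level A."""
    a = np.abs(np.asarray(x))
    return np.array([(a > t).mean() for t in thresholds])

rng = np.random.default_rng(1)
gaussian_noise = rng.normal(0.0, 1.0, 100_000)
heavy_noise = rng.standard_t(df=2, size=100_000)  # heavy-tailed surrogate, not true SaS

levels = np.array([1.0, 3.0, 5.0])
p_gauss = apd(gaussian_noise, levels)
p_heavy = apd(heavy_noise, levels)
# The heavy-tailed noise exceeds the highest level far more often than the Gaussian
```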

  16. Polarization effects on quantum levels in InN/GaN quantum wells.

    PubMed

    Lin, Wei; Li, Shuping; Kang, Junyong

    2009-12-02

    Polarization effects on quantum states in InN/GaN quantum wells have been investigated by means of ab initio calculations and spectroscopic ellipsometry. Through the position-dependent partial densities of states, our results show that the polarization, modified by the strain at different well thicknesses, leads to asymmetric band bending of the quantum well. The quantum levels are identified via the band structures, and their square wave-function distributions are analyzed via the partial charge densities. Further theoretical and experimental comparison of the imaginary part of the dielectric function shows that the overall transition probability increases under larger polarization fields, which can be attributed to the fact that the excited 2h quantum states have a greater overlap with the 1e states and enhance other hole quantum states in the well through hybridization. These results provide a new approach to improving the transition probability and light emission by enhancing the polarization fields in a suitable way.

  17. Back in the saddle: large-deviation statistics of the cosmic log-density field

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.

    2016-08-01

    We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.

  18. In vivo NMR imaging of sodium-23 in the human head.

    PubMed

    Hilal, S K; Maudsley, A A; Ra, J B; Simon, H E; Roschmann, P; Wittekoek, S; Cho, Z H; Mun, S K

    1985-01-01

    We report the first clinical nuclear magnetic resonance (NMR) images of cerebral sodium distribution in normal volunteers and in patients with a variety of pathological lesions. We have used a 1.5 T NMR magnet system. When compared with proton distribution, sodium shows a greater variation in its concentration from tissue to tissue and from normal to pathological conditions. Image contrast calculated on the basis of sodium concentration is 7 to 18 times greater than that of proton spin density. Normal images emphasize the extracellular compartments. In the clinical studies, areas of recent or old cerebral infarction and tumors show a pronounced increase of sodium content (300-400%). Actual measurements of image density values indicate that there is probably a further accentuation of the contrast by the increased "NMR visibility" of sodium in infarcted tissue. Sodium imaging may prove to be a more sensitive means for early detection of some brain disorders than other imaging methods.

  19. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.

  20. Microwave inversion of leaf area and inclination angle distributions from backscattered data

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Saleh, H. A.

    1985-01-01

    The backscattering coefficient from a slab of thin randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin disk approximation, is used to obtain a relationship between the horizontal-like polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
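The Phillips-Twomey step amounts to Tikhonov regularization with a second-difference smoothing matrix: minimize ||Ax - b||^2 + lam*||Lx||^2. A minimal sketch on a hypothetical blurring kernel, not the disk-scattering kernel of the paper:

```python
import numpy as np

def phillips_twomey(A, b, lam):
    """Regularized solution of the ill-posed linear system A x ~= b.

    Minimizes ||A x - b||^2 + lam * ||L x||^2, where L is the
    second-difference operator enforcing smoothness of x.
    """
    m = A.shape[1]
    L = np.diff(np.eye(m), n=2, axis=0)  # rows of the form [1, -2, 1]
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Hypothetical smooth profile and Gaussian blurring kernel (illustrative only)
n = 50
grid = np.linspace(0.0, 1.0, n)
true_density = np.exp(-((grid - 0.5) / 0.1) ** 2)  # smooth profile to recover
A = np.exp(-80.0 * (grid[:, None] - grid[None, :]) ** 2)
b = A @ true_density + 1e-3 * np.random.default_rng(2).normal(size=n)
recovered = phillips_twomey(A, b, lam=1e-4)
```

The smoothing penalty is what tames the noise amplification inherent in first-kind Fredholm inversion; without it the naive least-squares solution oscillates wildly.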

  1. Eigenvalue statistics for the sum of two complex Wishart matrices

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2014-09-01

    The sum of independent Wishart matrices, taken from distributions with unequal covariance matrices, plays a crucial role in multivariate statistics and has applications in the fields of quantitative finance and telecommunication. However, analytical results concerning the corresponding eigenvalue statistics have remained unavailable, even for the sum of two Wishart matrices. This can be attributed to the complicated and rotationally noninvariant nature of the matrix distribution, which makes extracting information about the eigenvalues a nontrivial task. Using a generalization of the Harish-Chandra-Itzykson-Zuber integral, we find an exact solution to this problem for the complex Wishart case when one of the covariance matrices is proportional to the identity matrix while the other is arbitrary. We derive exact and compact expressions for the joint probability density and the marginal density of eigenvalues. The analytical results are compared with numerical simulations, and we find perfect agreement.
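The setting is easy to reproduce numerically. The sketch below draws the sum of two complex Wishart matrices, one with covariance proportional to the identity, and extracts its real, positive eigenvalues, against which an analytical density could be histogram-checked; dimensions and covariances here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dof = 4, 20  # matrix dimension and Wishart degrees of freedom (illustrative)

def complex_wishart(sigma):
    """Draw W = G G^dagger with the columns of G complex normal, covariance sigma."""
    L = np.linalg.cholesky(sigma)
    G = L @ (rng.normal(size=(n, dof)) + 1j * rng.normal(size=(n, dof))) / np.sqrt(2.0)
    return G @ G.conj().T

sigma1 = np.eye(n)                      # covariance proportional to the identity
sigma2 = np.diag([1.0, 2.0, 3.0, 4.0])  # arbitrary second covariance
w_sum = complex_wishart(sigma1) + complex_wishart(sigma2)
eigs = np.linalg.eigvalsh(w_sum)        # real, ascending, all positive
```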

  2. Chaotic attractors and physical measures for some density dependent Leslie population models

    NASA Astrophysics Data System (ADS)

    Ugarcovici, Ilie; Weiss, Howard

    2007-12-01

    Following ecologists' discoveries, mathematicians have begun studying extensions of the ubiquitous age structured Leslie population model that allow some survival probabilities and/or fertility rates to depend on population densities. These nonlinear extensions commonly exhibit very complicated dynamics: through computer studies, some authors have discovered robust Hénon-like strange attractors in several families. Population biologists and demographers frequently wish to average a function over many generations and conclude that the average is independent of the initial population distribution. This type of 'ergodicity' seems to be a fundamental tenet in population biology. In this paper we develop the first rigorous ergodic theoretic framework for density dependent Leslie population models. We study two generation models with Ricker and Hassell (recruitment type) fertility terms. We prove that for some parameter regions these models admit a chaotic (ergodic) attractor which supports a unique physical probability measure. This physical measure, having full Lebesgue measure basin, satisfies in the strongest possible sense the population biologist's requirement for ergodicity in their population models. We use the celebrated work of Wang and Young 2001 Commun. Math. Phys. 218 1-97, and our results are the first applications of their method to biology, ecology or demography.
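The ergodicity requirement, that long-run averages not depend on the initial population, can be probed numerically on a toy two-generation Leslie map with Ricker-type fertility. The coefficients below are illustrative, not the parameter regions analyzed in the paper:

```python
import numpy as np

def leslie_ricker(state, f=(20.0, 20.0), s=0.7, c=1.0):
    """One generation of a two-class Leslie map with Ricker fertility.

    Fertility is damped by exp(-c * total density); coefficients are illustrative.
    """
    x1, x2 = state
    total = x1 + x2
    return ((f[0] * x1 + f[1] * x2) * np.exp(-c * total), s * x1)

def time_average(state0, n_burn=1_000, n_avg=100_000):
    """Long-run average of total density: the population biologist's 'ergodic' average."""
    state = state0
    for _ in range(n_burn):  # discard the transient
        state = leslie_ricker(state)
    acc = 0.0
    for _ in range(n_avg):
        state = leslie_ricker(state)
        acc += state[0] + state[1]
    return acc / n_avg

# If the attractor supports a unique physical measure with full-measure basin,
# averages started from different initial populations should agree.
a1 = time_average((0.5, 0.5))
a2 = time_average((2.0, 0.1))
```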

  3. Occupancy and abundance of the endangered yellowcheek darter in Arkansas

    USGS Publications Warehouse

    Magoulick, Daniel D.; Lynch, Dustin T.

    2015-01-01

    The Yellowcheek Darter (Etheostoma moorei) is a rare fish endemic to the Little Red River watershed in the Boston Mountains of northern Arkansas. Remaining populations of this species are geographically isolated and declining, and the species was listed in 2011 as federally endangered. Populations have declined, in part, due to intense seasonal stream drying and inundation of lower reaches by a reservoir. We used a kick seine sampling approach to examine distribution and abundance of Yellowcheek Darter populations in the Middle Fork and South Fork Little Red River. We used presence data to estimate occupancy rates and detection probability and examined relationships between Yellowcheek Darter density and environmental variables. The species was found at five Middle Fork and South Fork sites where it had previously been present in 2003–2004. Occupancy rates were >0.6 but with wide 95% CI, and where the darters occurred, densities were typical of other Ozark darters but highly variable. Detection probability and density were positively related to current velocity. Given that stream drying has become more extreme over the past 30 years and anthropogenic threats have increased, regular monitoring and active management may be required to reduce extinction risk of Yellowcheek Darter populations.

  4. Optoelectronics of inverted type-I CdS/CdSe core/crown quantum ring

    NASA Astrophysics Data System (ADS)

    Bose, Sumanta; Fan, Weijun; Zhang, Dao Hua

    2017-10-01

    Inverted type-I heterostructure core/crown quantum rings (QRs) are quantum-efficient luminophores whose spectral characteristics are highly tunable. Here, we study the optoelectronic properties of type-I core/crown CdS/CdSe QRs in the zincblende phase, over contrasting lateral sizes and crown widths. For this, we inspect their strain profiles, transition energies, transition matrix elements, spatial charge densities, electronic band structures, band-mixing probabilities, optical gain spectra, maximum optical gains, and differential optical gains. Our framework uses an effective-mass envelope function theory based on the 8-band k·p method, employing the valence force field model for calculating the atomic strain distributions. The gain calculations are based on the density-matrix equation and take into consideration the excitonic effects with intraband scattering. Variations in the QR lateral size and the relative widths of core and crown (ergo the composition) affect their energy levels, band-mixing probabilities, optical transition matrix elements, emission wavelengths/intensities, etc. The optical gain of QRs is also strongly dimension and composition dependent, with further dependence on the injection carrier density causing the band-filling effect. These factors also affect the maximum and differential gain at varying dimensions and compositions.

  5. Glyph-based analysis of multimodal directional distributions in vector field ensembles

    NASA Astrophysics Data System (ADS)

    Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger

    2015-04-01

    Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.

  6. Unraveling hadron structure with generalized parton distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrei Belitsky; Anatoly Radyushkin

    2004-10-01

    The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities, and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics, which encodes full information on a quantum-mechanical system. We give an extensive review of the main achievements in the development of this formalism. We discuss the physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.

  7. Evolution of the microstructure during the process of consolidation and bonding in soft granular solids.

    PubMed

    Yohannes, B; Gonzalez, M; Abebe, A; Sprockel, O; Nikfar, F; Kiang, S; Cuitiño, A M

    2016-04-30

    The evolution of microstructure during the powder compaction process was investigated using discrete particle modeling, which accounts for the particle size distribution and material properties such as plasticity, elasticity, and inter-particle bonding. The material properties were calibrated against powder compaction experiments and validated against tensile strength experiments for lactose monohydrate and microcrystalline cellulose, which are commonly used excipients in the pharmaceutical industry. The probability distribution function and the orientation of contact forces were used to study the evolution of the microstructure during the application of compaction pressure, unloading, and ejection of the compact from the die. The probability distribution function reveals that the compression contact forces increase as the compaction force (or the relative density) increases, while the maximum value of the tensile contact forces remains the same. During unloading of the compaction pressure, the distribution approaches a normal distribution with a mean value of zero. As the contact forces evolve, the anisotropy of the powder bed also changes. In particular, during loading the compression contact forces are aligned along the direction of the compaction pressure, whereas the tensile contact forces are oriented perpendicular to it. After ejection, the contact forces become isotropic.

  8. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
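The conservative finite-volume discretization of the underlying continuity equation, dp/dt + d(v p)/dx = 0, can be sketched in one dimension with an upwind flux. The drift field and step sizes below are illustrative and respect the CFL condition; this is not the pendulum system of the paper:

```python
import numpy as np

def fv_step(p, v_edges, dx, dt):
    """One conservative finite-volume (upwind) step of dp/dt + d(v p)/dx = 0.

    p holds cell-averaged densities; v_edges holds velocities at cell edges.
    With zero boundary fluxes, total probability is conserved exactly, and
    the density stays non-negative under the CFL condition |v| dt/dx <= 1.
    """
    # Upwind flux: take the density from the cell the flow comes from
    flux = np.where(v_edges > 0,
                    v_edges * np.append(0.0, p),   # left-cell density
                    v_edges * np.append(p, 0.0))   # right-cell density
    flux[0] = flux[-1] = 0.0  # no probability leaves the domain
    return p - dt / dx * (flux[1:] - flux[:-1])

n, dx, dt = 200, 0.05, 0.005
edges = np.linspace(-5.0, 5.0, n + 1)
v_edges = -edges                    # toy drift field pulling mass toward the origin
p = np.full(n, 1.0 / (n * dx))      # uniform initial density on [-5, 5]
for _ in range(500):
    p = fv_step(p, v_edges, dx, dt)
# Density concentrates near x = 0 while total probability stays exactly 1
```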

  9. Transport of polar and non-polar solvents through a carbon nanotube

    NASA Astrophysics Data System (ADS)

    Chopra, Manish; Phatak, Rohan; Choudhury, N.

    2013-02-01

    Transport of water through narrow pores is important in chemistry, biology, and materials science. In this work, we employ atomistic molecular dynamics (MD) simulations to carry out a comparative study of the transport of a polar and a non-polar solvent through a carbon nanotube (CNT). The flows of water and of methane through the nanotube are estimated in terms of the number of translocation events and compared. In both cases, transport occurred in bursts of unidirectional translocation pulses. Probability density and cumulative probability distribution functions are obtained for the translocated particles, and for particles exiting from the side they entered, as functions of the time spent in the nanochannel.

  10. Determination of the mass of globular cluster X-ray sources

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Hertz, P.; Steiner, J. E.; Murray, S. S.; Lightman, A. P.

    1984-01-01

    The precise positions of the luminous X-ray sources in eight globular clusters have been measured with the Einstein X-Ray Observatory. When combined with similarly precise measurements of the dynamical centers and core radii of the globular clusters, the X-ray source masses are determined to lie in the range 0.9-1.9 solar masses. The X-ray source positions and the detailed optical studies indicate that (1) the sources are probably all of similar mass, (2) the gravitational potentials in these high-central-density clusters are relatively smooth and isothermal, and (3) the X-ray sources are compact binaries and are probably formed by tidal capture.

  11. Weak Measurement and Quantum Smoothing of a Superconducting Qubit

    NASA Astrophysics Data System (ADS)

    Tan, Dian

    In quantum mechanics, the measurement outcome of an observable in a quantum system is intrinsically random, yielding a probability distribution. The state of the quantum system can be described by a density matrix rho(t), which depends on the information accumulated until time t, and represents our knowledge about the system. The density matrix rho(t) gives probabilities for the outcomes of measurements at time t. Further probing of the quantum system allows us to refine our prediction in hindsight. In this thesis, we experimentally examine a quantum smoothing theory in a superconducting qubit by introducing an auxiliary matrix E(t) which is conditioned on information obtained from time t to a final time T. With the complete information before and after time t, the pair of matrices [rho(t), E(t)] can be used to make smoothed predictions for the measurement outcome at time t. We apply the quantum smoothing theory in the case of continuous weak measurement unveiling the retrodicted quantum trajectories and weak values. In the case of strong projective measurement, while the density matrix rho(t) with only diagonal elements in a given basis |n〉 may be treated as a classical mixture, we demonstrate a failure of this classical mixture description in determining the smoothed probabilities for the measurement outcome at time t with both diagonal rho(t) and diagonal E(t). We study the correlations between quantum states and weak measurement signals and examine aspects of the time symmetry of continuous quantum measurement. We also extend our study of quantum smoothing theory to the case of resonance fluorescence of a superconducting qubit with homodyne measurement and observe some interesting effects such as the modification of the excited state probabilities, weak values, and evolution of the predicted and retrodicted trajectories.
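The smoothed prediction from the pair [rho(t), E(t)] can be sketched with the standard past-quantum-state rule, P(m) proportional to Tr(Pi_m rho Pi_m E). The qubit matrices below are hypothetical illustrations, not data from the thesis:

```python
import numpy as np

def smoothed_probs(rho, E, projectors):
    """Smoothed (retrodicted) outcome probabilities from the pair (rho, E).

    Past-quantum-state rule: P(m) is proportional to Tr(Pi_m rho Pi_m E),
    where rho encodes the record before time t and E the record after.
    """
    w = np.array([np.trace(P @ rho @ P @ E).real for P in projectors])
    return w / w.sum()

# Qubit example: the forward state is a 50/50 diagonal mixture, but the
# later record (encoded in a hypothetical effect matrix E) reweights the
# hindsight probabilities for the measurement outcome at time t.
rho = np.diag([0.5, 0.5]).astype(complex)
E = np.diag([0.9, 0.1]).astype(complex)
projectors = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
p_smooth = smoothed_probs(rho, E, projectors)  # -> [0.9, 0.1]
```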

  12. Acid Hydrolysis and Molecular Density of Phytoglycogen and Liver Glycogen Helps Understand the Bonding in Glycogen α (Composite) Particles

    PubMed Central

    Powell, Prudence O.; Sullivan, Mitchell A.; Sheehy, Joshua J.; Schulz, Benjamin L.; Warren, Frederick J.; Gilbert, Robert G.

    2015-01-01

    Phytoglycogen (from certain mutant plants) and animal glycogen are highly branched glucose polymers with similarities in structural features and molecular size range. Both appear to form composite α particles from smaller β particles. The molecular size distribution of liver glycogen is bimodal, with distinct α and β components, while that of phytoglycogen is monomodal. This study aims to enhance our understanding of the nature of the link between liver-glycogen β particles that results in the formation of large α particles. It examines the time evolution of the size distribution of these molecules during acid hydrolysis, and the size dependence of the molecular density of both glucans. The monomodal distribution of phytoglycogen decreases uniformly in time with hydrolysis, while with glycogen, the large particles degrade significantly more quickly. The size dependence of the molecular density shows qualitatively different shapes for these two types of molecules. The data, combined with a quantitative model for the evolution of the distribution during degradation, suggest that the bonding of β particles into α particles differs between phytoglycogen and liver glycogen: most likely a glycosidic linkage for phytoglycogen, and a covalent or strong non-covalent linkage, probably involving a protein, for liver glycogen. This finding is of importance for diabetes, where α-particle structure is impaired. PMID:25799321

  13. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a procedure faster than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly on a functional linked to the second-order derivative of the pdf. Because we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. The two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the mean neutrality of Tunisian Berber populations.
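For context, here is a numpy-only sketch of a Gaussian kernel density estimator using the rule-of-thumb bandwidth, the Gaussian-reference special case that plug-in methods refine by actually estimating the second-derivative functional. The sample, grid, and constants are illustrative:

```python
import numpy as np

def rule_of_thumb_bandwidth(x):
    """Silverman-style bandwidth: the Gaussian-reference special case that
    plug-in methods improve on by estimating the pdf's curvature."""
    n = len(x)
    sigma = min(x.std(ddof=1),
                (np.percentile(x, 75) - np.percentile(x, 25)) / 1.34)
    return 0.9 * sigma * n ** (-0.2)

def kde(x, grid, h):
    """Gaussian kernel density estimate evaluated on 'grid' with bandwidth h."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 2000)
grid = np.linspace(-4.0, 4.0, 81)
f_hat = kde(x, grid, rule_of_thumb_bandwidth(x))
# f_hat approximates the standard normal density on the grid
```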

  14. Image-Based Modeling Reveals Dynamic Redistribution of DNA Damageinto Nuclear Sub-Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costes, Sylvain V.; Ponomarev, Artem; Chen, James L.; Nguyen, David; Cucinotta, Francis A.

    2007-08-03

    Several proteins involved in the response to DNA double-strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage occurs. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM, and gammaH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation and low LET. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by "relative DNA image measurements." This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while gammaH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that DNA damage induced foci are restricted to certain regions of the nucleus of human epithelial cells. It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair.

  15. Chromosome Model reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-domains

    NASA Technical Reports Server (NTRS)

    Costes, Sylvain V.; Ponomarev, Artem; Chen, James L.; Cucinotta, Francis A.; Barcellos-Hoff, Helen

    2007-01-01

    Several proteins involved in the response to DNA double strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage is induced. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM and gammaH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by relative DNA image measurements. This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent in regions with lower density DNA than predicted. This deviation from random behavior was more pronounced within the first 5 min following irradiation for phosphorylated ATM RIF, while gammaH2AX and 53BP1 RIF showed very pronounced deviation up to 30 min after exposure. These data suggest the existence of repair centers in mammalian epithelial cells. These centers would be nuclear sub-domains where DNA lesions would be collected for more efficient repair.
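The DNA-weighted random (Poisson) null hypothesis, that damage locations simply track DNA density, is straightforward to simulate. The density profile below is a hypothetical one-dimensional illustration, not the chromosome geometry used in these papers:

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical relative DNA density across five nuclear sub-domains (sums to 1)
dna_density = np.array([0.10, 0.30, 0.05, 0.40, 0.15])

# Under the DNA-weighted random null model, break sites are drawn in
# proportion to local DNA density
n_breaks = 100_000
sites = rng.choice(len(dna_density), size=n_breaks, p=dna_density)
observed = np.bincount(sites, minlength=len(dna_density)) / n_breaks
# 'observed' should track dna_density; a systematic excess of foci in
# low-density regions, as reported for RIF, would reject this null model
```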

  16. Entropy Inequalities for Stable Densities and Strengthened Central Limit Theorems

    NASA Astrophysics Data System (ADS)

    Toscani, Giuseppe

    2016-10-01

We consider the central limit theorem for stable laws in the case of the standardized sum of independent and identically distributed random variables with regular probability density function. By showing decay of different entropy functionals along the sequence we prove convergence with explicit rate in various norms to a centered Lévy density of parameter λ > 1. This introduces a new information-theoretic approach to the central limit theorem for stable laws, in which the main argument is shown to be the relative fractional Fisher information, recently introduced in Toscani (Ricerche Mat 65(1):71-91, 2016). In particular, it is proven that, with respect to the relative fractional Fisher information, the Lévy density satisfies an analogue of the logarithmic Sobolev inequality, which allows one to pass from the monotonicity and decay to zero of the relative fractional Fisher information of the standardized sum to decay to zero in relative entropy with an explicit decay rate.

  17. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density Pt(x ) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  18. Intermittent turbulence and turbulent structures in LAPD and ET

    NASA Astrophysics Data System (ADS)

    Carter, T. A.; Pace, D. C.; White, A. E.; Gauvreau, J.-L.; Gourdain, P.-A.; Schmitz, L.; Taylor, R. J.

    2006-12-01

    Strongly intermittent turbulence is observed in the shadow of a limiter in the Large Plasma Device (LAPD) and in both the inboard and outboard scrape-off-layer (SOL) in the Electric Tokamak (ET) at UCLA. In LAPD, the amplitude probability distribution function (PDF) of the turbulence is strongly skewed, with density depletion events (or "holes") dominant in the high density region and density enhancement events (or "blobs") dominant in the low density region. Two-dimensional cross-conditional averaging shows that the blobs are detached, outward-propagating filamentary structures with a clear dipolar potential while the holes appear to be part of a more extended turbulent structure. A statistical study of the blobs reveals a typical size of ten times the ion sound gyroradius and a typical velocity of one tenth the sound speed. In ET, intermittent turbulence is observed on both the inboard and outboard midplane.
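The skewness of the amplitude PDF described above is a standard diagnostic for blob- versus hole-dominated turbulence. A minimal sketch, using synthetic exponential burst signals as hypothetical stand-ins for probe time series:

```python
import numpy as np

def amplitude_skewness(signal):
    """Sample skewness of the amplitude PDF of a fluctuation series:
    positive when large positive bursts ("blobs") dominate, negative
    when depletion events ("holes") dominate."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    return float(np.mean(x**3) / np.mean(x**2) ** 1.5)

rng = np.random.default_rng(0)
blobs = rng.exponential(1.0, 100_000)   # synthetic burst-dominated signal
holes = -blobs                          # synthetic depletion-dominated signal
print(amplitude_skewness(blobs) > 0.0, amplitude_skewness(holes) < 0.0)
```

The sign of the skewness alone distinguishes the two regimes reported in the abstract; the conditional-averaging analysis requires the full two-dimensional probe data.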

  19. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  20. Statistical Decoupling of a Lagrangian Fluid Parcel in Newtonian Cosmology

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Szalay, Alex

    2016-03-01

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  1. STATISTICAL DECOUPLING OF A LAGRANGIAN FLUID PARCEL IN NEWTONIAN COSMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Szalay, Alex, E-mail: xwang@cita.utoronto.ca

The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  2. Spacecraft Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Bussy-Virat, Charles

    The rapid increase of the number of objects in orbit around the Earth poses a serious threat to operational spacecraft and astronauts. In order to effectively avoid collisions, mission operators need to assess the risk of collision between the satellite and any other object whose orbit is likely to approach its trajectory. Several algorithms predict the probability of collision but have limitations that impair the accuracy of the prediction. An important limitation is that uncertainties in the atmospheric density are usually not taken into account in the propagation of the covariance matrix from current epoch to closest approach time. The Spacecraft Orbital Characterization Kit (SpOCK) was developed to accurately predict the positions and velocities of spacecraft. The central capability of SpOCK is a high accuracy numerical propagator of spacecraft orbits and computations of ancillary parameters. The numerical integration uses a comprehensive modeling of the dynamics of spacecraft in orbit that includes all the perturbing forces that a spacecraft is subject to in orbit. In particular, the atmospheric density is modeled by thermospheric models to allow for an accurate representation of the atmospheric drag. SpOCK predicts the probability of collision between two orbiting objects taking into account the uncertainties in the atmospheric density. Monte Carlo procedures are used to perturb the initial position and velocity of the primary and secondary spacecraft from their covariance matrices. Developed in C, SpOCK supports parallelism to quickly assess the risk of collision so it can be used operationally in real time. The upper atmosphere of the Earth is strongly driven by the solar activity. In particular, abrupt transitions from slow to fast solar wind cause important disturbances of the atmospheric density, hence of the drag acceleration that spacecraft are subject to. 
The Probability Distribution Function (PDF) model was developed to predict the solar wind speed five days in advance. In particular, the PDF model is able to predict rapid enhancements in the solar wind speed. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. Ensemble forecasts provide the forecasters with an estimation of the uncertainty in the prediction, which can be used to derive uncertainties in the atmospheric density and in the drag acceleration. The dissertation then demonstrates that uncertainties in the atmospheric density result in large uncertainties in the prediction of the probability of collision. As an example, the effects of a geomagnetic storm on the probability of collision are illustrated. The research aims at providing tools and analyses that help understand and predict the effects of uncertainties in the atmospheric density on the probability of collision. The ultimate motivation is to support mission operators in making the correct decision with regard to a potential collision avoidance maneuver by providing an uncertainty on the prediction of the probability of collision instead of a single value. This approach can help avoid performing unnecessary costly maneuvers, while making sure that the risk of collision is fully evaluated.
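The Monte Carlo procedure described above can be sketched as follows. This is a simplified illustration, not SpOCK itself: straight-line relative motion stands in for the full numerical orbit propagator, and all states, covariances, and the hard-body radius are hypothetical values.

```python
import numpy as np

def mc_collision_probability(r1, v1, cov1, r2, v2, cov2,
                             hard_body_radius, t_span,
                             n_samples=5_000, seed=0):
    """Monte Carlo collision probability: perturb the initial positions
    of both objects from their covariance matrices, propagate each
    perturbed pair, and return the fraction of pairs whose closest
    approach falls below the combined hard-body radius."""
    rng = np.random.default_rng(seed)
    p1 = rng.multivariate_normal(r1, cov1, n_samples)
    p2 = rng.multivariate_normal(r2, cov2, n_samples)
    rel_v = np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)
    ts = np.linspace(0.0, t_span, 200)
    # Relative separation of each perturbed pair at every sampled time.
    rel = (p1 - p2)[:, None, :] + ts[None, :, None] * rel_v[None, None, :]
    miss = np.linalg.norm(rel, axis=2).min(axis=1)   # closest approach
    return float(np.mean(miss < hard_body_radius))

# Head-on geometry with nominal closest approach at t = 5.
cov = 0.01 * np.eye(3)
p = mc_collision_probability([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], cov,
                             [10.0, 0.0, 0.0], [-1.0, 0.0, 0.0], cov,
                             hard_body_radius=1.0, t_span=10.0)
print(0.0 <= p <= 1.0)
```

Replacing the straight-line propagation with a numerical integrator that includes a drag model is what lets density uncertainty feed into the estimated probability.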

  3. A review of contemporary methods for the presentation of scientific uncertainty.

    PubMed

    Makinson, K A; Hamby, D M; Edwards, J A

    2012-12-01

Graphic methods for displaying uncertainty are often the most concise and informative way to communicate abstract concepts. Presentation methods currently in use for the display and interpretation of scientific uncertainty are reviewed. Numerous subjective and objective uncertainty display methods are presented, including qualitative assessments, node and arrow diagrams, standard statistical methods, box-and-whisker plots, robustness and opportunity functions, contribution indexes, probability density functions, cumulative distribution functions, and graphical likelihood functions.
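As a small illustration of one display in this list, an empirical cumulative distribution function can be computed directly from a sample (a generic sketch, not tied to any particular method in the review):

```python
import numpy as np

def empirical_cdf(samples):
    """Empirical cumulative distribution function: returns the sorted
    sample values and the cumulative probabilities, ready for a step
    plot of the kind used to display uncertainty in a quantity."""
    x = np.sort(np.asarray(samples, dtype=float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p

x, p = empirical_cdf([3.0, 1.0, 4.0, 2.0])
print(list(x))   # [1.0, 2.0, 3.0, 4.0]
print(list(p))   # [0.25, 0.5, 0.75, 1.0]
```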

  4. Fusion of Imaging and Inertial Sensors for Navigation

    DTIC Science & Technology

    2006-09-01

combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat... equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t0) = x0 (2.4). For a given input function, u0(t)... differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random

  5. Technical Report 1205: A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-07-08

The Lanchester combat model is a simple way to assess the effects of quantity and quality... model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned... the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at either a

  6. Visualizing Time-Varying Distribution Data in EOS Application

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei

    2004-01-01

    In this research, we have developed several novel visualization methods for spatial probability density function data. Our focus has been on 2D spatial datasets, where each pixel is a random variable, and has multiple samples which are the results of experiments on that random variable. We developed novel clustering algorithms as a means to reduce the information contained in these datasets; and investigated different ways of interpreting and clustering the data.

  7. Probabilistic structural analysis of aerospace components using NESSUS

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.
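The general workflow, propagating probabilistic input descriptions to a response density and a failure probability, can be illustrated with plain Monte Carlo sampling. NESSUS itself uses fast probability integration rather than brute-force sampling, and the input distributions below are hypothetical placeholders:

```python
import numpy as np

def response_density_mc(n_samples=200_000, seed=5):
    """Propagate uncertain load, cross-sectional area, and material
    strength (all distributions hypothetical) to the stress response
    and a failure probability P(stress > strength)."""
    rng = np.random.default_rng(seed)
    load = rng.normal(1.0e5, 1.0e4, n_samples)        # axial load [N]
    area = rng.normal(1.0e-3, 5.0e-5, n_samples)      # cross-section [m^2]
    strength = rng.normal(1.5e8, 1.0e7, n_samples)    # strength [Pa]
    stress = load / area          # response samples define its density
    return stress, float(np.mean(stress > strength))

stress, p_fail = response_density_mc()
print(p_fail)
```

A histogram of `stress` approximates the response probability density from which risk figures are read off.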

  8. A matrix contraction process

    NASA Astrophysics Data System (ADS)

    Wilkinson, Michael; Grant, John

    2018-03-01

We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
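The process can be simulated directly. The sketch below assumes illustrative choices not taken from the paper: i.i.d. factors of the form I + 0.1N with standard normal entries of N, and a reset to 0.5 times the identity.

```python
import numpy as np

def simulate_norm_density(n_steps=50_000, reset_norm=0.5, sigma=0.1, seed=1):
    """Multiply i.i.d. random matrices I + sigma*N; whenever the
    spectral norm of the running product reaches unity, reset the
    product to reset_norm times the identity. The histogram of the
    recorded norms approximates the stationary density of the norm."""
    rng = np.random.default_rng(seed)
    prod = reset_norm * np.eye(2)
    norms = np.empty(n_steps)
    for i in range(n_steps):
        prod = (np.eye(2) + sigma * rng.standard_normal((2, 2))) @ prod
        eps = np.linalg.norm(prod, 2)        # spectral norm
        if eps >= 1.0:                       # reset to a multiple of identity
            prod = reset_norm * np.eye(2)
            eps = reset_norm
        norms[i] = eps
    return norms

norms = simulate_norm_density()
print(norms.min() > 0.0 and norms.max() < 1.0)
```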

  9. Stochastic modelling of intermittent fluctuations in the scrape-off layer: Correlations, distributions, level crossings, and moment estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.

A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a superposition of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
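A minimal simulation of this kind of shot-noise model, with one-sided exponential pulses and exponentially distributed amplitudes; the parameter values are illustrative, not from the paper:

```python
import numpy as np

def shot_noise(t, arrival_rate, pulse_duration, mean_amplitude, rng):
    """Superposition of uncorrelated one-sided exponential pulses with
    exponentially distributed amplitudes. For this model the stationary
    probability density is known to be a Gamma distribution with shape
    arrival_rate * pulse_duration and mean shape * mean_amplitude."""
    t_max = t[-1]
    n_pulses = rng.poisson(arrival_rate * t_max)
    arrivals = rng.uniform(0.0, t_max, n_pulses)
    amplitudes = rng.exponential(mean_amplitude, n_pulses)
    signal = np.zeros_like(t)
    for t_k, a_k in zip(arrivals, amplitudes):
        tail = t >= t_k
        signal[tail] += a_k * np.exp(-(t[tail] - t_k) / pulse_duration)
    return signal

rng = np.random.default_rng(2)
t = np.linspace(0.0, 500.0, 50_000)
x = shot_noise(t, arrival_rate=1.0, pulse_duration=2.0,
               mean_amplitude=1.0, rng=rng)
print(x.mean())   # stationary mean should be near 1.0 * 2.0 * 1.0 = 2
```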

  10. Experimental and DFT studies on the vibrational spectra of 1H-indene-2-boronic acid

    NASA Astrophysics Data System (ADS)

    Alver, Özgur; Kaya, Mehmet Fatih

    2014-11-01

Stable conformers and geometrical molecular structures of 1H-indene-2-boronic acid (I-2B(OH)2) were studied experimentally and theoretically using FT-IR and FT-Raman spectroscopic methods. FT-IR and FT-Raman spectra were recorded in the regions of 4000-400 cm-1 and 3700-400 cm-1, respectively. The optimized geometric structures were obtained with the Becke-3-Lee-Yang-Parr (B3LYP) hybrid density functional theory method and the 6-31++G(d,p) basis set. Vibrational wavenumbers of I-2B(OH)2 were calculated using the B3LYP density functional method with the same 6-31++G(d,p) basis set. Experimental and theoretical results show that the B3LYP density functional method gives satisfactory results for predicting vibrational wavenumbers, except for the OH stretching modes, probably because of increasing anharmonicity in the high-wavenumber region and possible intra- and intermolecular interactions at the OH edges. To support the assigned vibrational wavenumbers, the potential energy distribution (PED) values were also calculated using the VEDA 4 (Vibrational Energy Distribution Analysis) program.

  11. SUGGEL: A Program Suggesting the Orbital Angular Momentum of a Neutron Resonance from the Magnitude of its Neutron Width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, S.Y.

    2001-02-02

The SUGGEL computer code has been developed to suggest a value for the orbital angular momentum of a neutron resonance that is consistent with the magnitude of its neutron width. The suggestion is based on the probability that a resonance having a certain value of gΓn is an l-wave resonance. The probability is calculated by using Bayes' theorem on the conditional probability. The probability density functions (pdf's) of gΓn for up to d-wave (l=2) have been derived from the χ² distribution of Porter and Thomas. The pdf's take two possible channel spins into account. This code is a tool which evaluators will use to construct resonance parameters and help to assign resonance spin. The use of this tool is expected to reduce time and effort in the evaluation procedure, since the number of repeated runs of the fitting code (e.g., SAMMY) may be reduced.
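The Bayesian step can be sketched as follows, keeping only the Porter-Thomas (χ² with one degree of freedom) likelihoods. The average widths and priors are hypothetical inputs; the real code also accounts for the two possible channel spins and energy-dependent penetrabilities.

```python
import numpy as np

def l_posterior(g_gamma_n, avg_widths, priors):
    """Posterior probability of each orbital angular momentum l given an
    observed width g*Gamma_n, via Bayes' theorem with Porter-Thomas
    likelihoods. avg_widths[l] is the assumed average width for l-wave
    resonances; priors[l] is the prior probability of l."""
    avg = np.asarray(avg_widths, dtype=float)
    prior = np.asarray(priors, dtype=float)
    # chi^2_1 density with mean avg: f(x) = exp(-x/(2 avg)) / sqrt(2 pi x avg)
    likelihood = (np.exp(-g_gamma_n / (2.0 * avg))
                  / np.sqrt(2.0 * np.pi * g_gamma_n * avg))
    posterior = likelihood * prior
    return posterior / posterior.sum()

# A large observed width favours s-waves (l = 0), whose average width is largest.
p = l_posterior(5.0, avg_widths=[10.0, 0.1, 0.01], priors=[1/3, 1/3, 1/3])
print(p[0] > p[1] > p[2])
```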

  12. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  13. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data.

    PubMed

    González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría

    2017-01-01

The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine, respectively, whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
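The first step, fitting a Weibull probability density to a vertical distribution, can be sketched with the standard maximum-likelihood fixed-point iteration for the shape parameter (a generic estimator, not the authors' fitting code; the synthetic sample stands in for field data):

```python
import numpy as np

def fit_weibull(sample, tol=1e-8, max_iter=500):
    """Maximum-likelihood fit of a two-parameter Weibull density.
    Solves the standard fixed-point equation for the shape k (damped
    for robustness), then computes the scale in closed form."""
    x = np.asarray(sample, dtype=float)
    logx = np.log(x)
    k = 1.0
    for _ in range(max_iter):
        xk = x ** k
        k_new = 1.0 / (np.sum(xk * logx) / np.sum(xk) - logx.mean())
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = 0.5 * (k + k_new)          # damped update
    scale = np.mean(x ** k) ** (1.0 / k)
    return k, scale

rng = np.random.default_rng(3)
heights = 4.0 * rng.weibull(2.5, 50_000)   # synthetic data: shape 2.5, scale 4
k_hat, scale_hat = fit_weibull(heights)
print(k_hat, scale_hat)
```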

  14. Density dependent interactions between VA mycorrhizal fungi and even-aged seedlings of two perennial Fabaceae species.

    PubMed

    Allsopp, N; Stock, W D

    1992-08-01

The interaction of density and mycorrhizal effects on the growth, mineral nutrition and size distribution of seedlings of two perennial members of the Fabaceae was investigated in pot culture. Seedlings of Otholobium hirtum and Aspalathus linearis were grown at densities of 1, 4, 8 and 16 plants per 13-cm pot with or without vesicular-arbuscular (VA) mycorrhizal inoculum for 120 days. Plant mass, relative growth rates, height and leaf number all decreased with increasing plant density. This was ascribed to the decreasing availability of phosphorus per plant as density increased. O. hirtum was highly dependent on mycorrhizas for P uptake but both mycorrhizal and non-mycorrhizal A. linearis seedlings were able to extract soil P with equal ease. Plant size distribution as measured by the coefficient of variation (CV) of shoot mass was greater at higher densities. CVs of mycorrhizal O. hirtum plants were higher than those of non-mycorrhizal plants. CVs of the facultatively mycorrhizal A. linearis were similar for both mycorrhizal and non-mycorrhizal plants. Higher CVs are attributed to resource preemption by larger individuals. Individuals in populations with high CVs will probably survive stress which would result in the extinction of populations with low CVs. Mass of mycorrhizal plants of both species decreased more rapidly with increasing density than did non-mycorrhizal plant mass. It is concluded that the cost of being mycorrhizal increases as plant density increases, while the benefit decreases. The results suggest that mycorrhizas will influence density-dependent population processes of facultative and obligate mycorrhizal species.

  15. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
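The quantity described, a bivariate Gaussian density integrated over a disk that need not be centered on the error ellipse, can also be evaluated by direct numerical integration. The sketch below uses a simple midpoint rule on a polar grid rather than the adapted debris-collision formula of the abstract:

```python
import numpy as np

def prob_within_radius(mean, cov, point, radius, n=400):
    """Integrate a bivariate Gaussian probability density (mean, cov
    from a location error ellipse) over a disk of the given radius
    around `point`, which need not coincide with the mean."""
    inv = np.linalg.inv(cov)
    det = np.linalg.det(cov)
    dr = radius / n
    dth = 2.0 * np.pi / n
    r = (np.arange(n) + 0.5) * dr
    th = (np.arange(n) + 0.5) * dth
    R, TH = np.meshgrid(r, th, indexing="ij")
    dx = point[0] + R * np.cos(TH) - mean[0]
    dy = point[1] + R * np.sin(TH) - mean[1]
    q = inv[0, 0] * dx**2 + 2.0 * inv[0, 1] * dx * dy + inv[1, 1] * dy**2
    pdf = np.exp(-0.5 * q) / (2.0 * np.pi * np.sqrt(det))
    return float(np.sum(pdf * R) * dr * dth)   # area element r dr dtheta

# Circular 1-sigma errors, disk centred on the mean: Rayleigh gives 1 - e^{-1/2}.
p = prob_within_radius((0.0, 0.0), np.eye(2), (0.0, 0.0), 1.0)
print(abs(p - (1.0 - np.exp(-0.5))) < 1e-3)
```

Moving `point` away from the mean shrinks the probability, matching the off-center case the technique was designed for.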

  16. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.

  17. An efficient multi-objective optimization method for water quality sensor placement within water distribution systems considering contamination probability variations.

    PubMed

    He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou

    2018-06-20

Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability to contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed at timely and effective detection of (un)intentional intrusion events. However, available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While simple to implement, this assumption may not conform to the fact that nodal contamination probabilities can vary significantly across regions owing to variations in population density and user properties. Furthermore, low computational efficiency is another important factor that has seriously hampered the practical application of currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method that explicitly accounts for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than with the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for the WDS, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  19. Origin of heterogeneous spiking patterns from continuously distributed ion channel densities: a computational study in spinal dorsal horn neurons.

    PubMed

    Balachandar, Arjun; Prescott, Steven A

    2018-05-01

Distinct spiking patterns may arise from qualitative differences in ion channel expression (i.e. when different neurons express distinct ion channels) and/or when quantitative differences in expression levels qualitatively alter the spike generation process. We hypothesized that spiking patterns in neurons of the superficial dorsal horn (SDH) of the spinal cord reflect both mechanisms. We reproduced SDH neuron spiking patterns by varying densities of Kv1- and A-type potassium conductances. Plotting the spiking patterns that emerge from different density combinations revealed spiking-pattern regions separated by boundaries (bifurcations). This map suggests that certain spiking pattern combinations occur when the distribution of potassium channel densities straddles boundaries, whereas other spiking patterns reflect distinct patterns of ion channel expression. The former mechanism may explain why certain spiking patterns co-occur in genetically identified neuron types. We also present algorithms to predict spiking pattern proportions from ion channel density distributions, and vice versa. Neurons are often classified by spiking pattern. Yet, some neurons exhibit distinct patterns under subtly different test conditions, which suggests that they operate near an abrupt transition, or bifurcation. A set of such neurons may exhibit heterogeneous spiking patterns not because of qualitative differences in which ion channels they express, but rather because quantitative differences in expression levels cause neurons to operate on opposite sides of a bifurcation. Neurons in the spinal dorsal horn, for example, respond to somatic current injection with patterns that include tonic, single, gap, delayed and reluctant spiking. It is unclear whether these patterns reflect five cell populations (defined by distinct ion channel expression patterns), heterogeneity within a single population, or some combination thereof. 
We reproduced all five spiking patterns in a computational model by varying the densities of a low-threshold (Kv1-type) potassium conductance and an inactivating (A-type) potassium conductance and found that single, gap, delayed and reluctant spiking arise when the joint probability distribution of those channel densities spans two intersecting bifurcations that divide the parameter space into quadrants, each associated with a different spiking pattern. Tonic spiking likely arises from a separate distribution of potassium channel densities. These results argue in favour of two cell populations, one characterized by tonic spiking and the other by heterogeneous spiking patterns. We present algorithms to predict spiking pattern proportions based on ion channel density distributions and, conversely, to estimate ion channel density distributions based on spiking pattern proportions. The implications for classifying cells based on spiking pattern are discussed. © 2018 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
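The forward direction, predicting spiking-pattern proportions from an ion channel density distribution, can be sketched by sampling the joint density and classifying each draw against the two bifurcation boundaries. The boundary positions, the Gaussian density, and the quadrant-to-pattern mapping below are hypothetical placeholders, not the paper's fitted values:

```python
import numpy as np

def pattern_proportions(mean, cov, kv1_boundary, a_boundary,
                        n_samples=100_000, seed=4):
    """Sample (Kv1-type, A-type) conductance densities from a joint
    Gaussian and classify each draw by which side of the two
    intersecting bifurcation boundaries it falls on; each quadrant
    maps to one spiking pattern."""
    rng = np.random.default_rng(seed)
    g = rng.multivariate_normal(mean, cov, n_samples)
    idx = 2 * (g[:, 0] >= kv1_boundary) + (g[:, 1] >= a_boundary)
    names = ["single", "gap", "delayed", "reluctant"]  # hypothetical mapping
    counts = np.bincount(idx, minlength=4)
    return dict(zip(names, counts / n_samples))

# A distribution centred on the intersection splits evenly across quadrants.
props = pattern_proportions(mean=(1.0, 1.0), cov=0.25 * np.eye(2),
                            kv1_boundary=1.0, a_boundary=1.0)
print(props)
```

Shifting `mean` relative to the boundaries skews the predicted proportions, which is the basis of the inverse estimation the authors also describe.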

  20. Twenty-five years of change in southern African passerine diversity: nonclimatic factors of change.

    PubMed

    Péron, Guillaume; Altwegg, Res

    2015-09-01

    We analysed more than 25 years of change in passerine bird distribution in South Africa, Swaziland and Lesotho, to show that species distributions can be influenced by processes that are at least in part independent of the local strength and direction of climate change: land use and ecological succession. We used occupancy models that separate species' detection from species' occupancy probability, fitted to citizen science data from both phases of the Southern African Bird Atlas Project (1987-1996 and 2007-2013). Temporal trends in species' occupancy probability were interpreted in terms of local extinction/colonization, and temporal trends in detection probability were interpreted in terms of change in abundance. We found for the first time at this scale that, as predicted in the context of bush encroachment, closed-savannah specialists increased where open-savannah specialists decreased. In addition, the trend in the abundance of species a priori thought to be favoured by agricultural conversion was negatively correlated with human population density, which is in line with hypotheses explaining the decline in farmland birds in the Northern Hemisphere. In addition to climate, vegetation cover and the intensity and time since agricultural conversion constitute important predictors of biodiversity changes in the region. Their inclusion will improve the reliability of predictive models of species distribution. © 2015 John Wiley & Sons Ltd.

  1. HIGH STAR FORMATION RATES IN TURBULENT ATOMIC-DOMINATED GAS IN THE INTERACTING GALAXIES IC 2163 AND NGC 2207

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmegreen, Bruce G.; Kaufman, Michele; Bournaud, Frédéric

    CO observations of the interacting galaxies IC 2163 and NGC 2207 are combined with HI, Hα, and 24 μm observations to study the star formation rate (SFR) surface density as a function of the gas surface density. More than half of the high-SFR regions are HI dominated. When compared to other galaxies, these HI-dominated regions have excess SFRs relative to their molecular gas surface densities but normal SFRs relative to their total gas surface densities. The HI-dominated regions are mostly located in the outer part of NGC 2207 where the HI velocity dispersion is high, 40–50 km s−1. We suggest that the star-forming clouds in these regions have envelopes at lower densities than normal, making them predominantly atomic, and cores at higher densities than normal because of the high turbulent Mach numbers. This is consistent with theoretical predictions of a flattening in the density probability distribution function for compressive, high Mach number turbulence.

  2. Increasing power-law range in avalanche amplitude and energy distributions

    NASA Astrophysics Data System (ADS)

    Navas-Portella, Víctor; Serra, Isabel; Corral, Álvaro; Vives, Eduard

    2018-02-01

    Power-law-type probability density functions spanning several orders of magnitude are found for different avalanche properties. We propose a methodology to overcome empirical constraints that limit the range of truncated power-law distributions. By considering catalogs of events that cover different observation windows, the maximum likelihood estimation of a global power-law exponent is computed. This methodology is applied to amplitude and energy distributions of acoustic emission avalanches in failure-under-compression experiments of a nanoporous silica glass, finding in some cases global exponents in an unprecedented broad range: 4.5 decades for amplitudes and 9.5 decades for energies. In the latter case, however, strict statistical analysis suggests experimental limitations might alter the power-law behavior.
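The maximum likelihood estimation of a power-law exponent can be sketched in its simplest, untruncated form (upper cutoff b → ∞), the textbook continuous-power-law estimator; the paper's actual methodology handles truncated ranges and merges catalogs with different observation windows, which this sketch does not attempt:

```python
import math
import random

def powerlaw_mle(samples, xmin):
    """MLE of alpha for an untruncated continuous power law
    p(x) ∝ x^{-alpha}, x >= xmin: alpha = 1 + n / sum(ln(x_i / xmin)).
    This is the b -> infinity special case, not the paper's truncated fit."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# synthetic check: draw from a power law with alpha = 2.5 via
# inverse-transform sampling, then recover the exponent
rng = random.Random(1)
alpha_true = 2.5
data = [(1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(50000)]
alpha_hat = powerlaw_mle(data, xmin=1.0)
```

On 50,000 synthetic draws the estimate lands within a few hundredths of the true exponent; real avalanche catalogs additionally require the truncation and goodness-of-fit machinery the abstract alludes to.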

  3. Increasing power-law range in avalanche amplitude and energy distributions.

    PubMed

    Navas-Portella, Víctor; Serra, Isabel; Corral, Álvaro; Vives, Eduard

    2018-02-01

    Power-law-type probability density functions spanning several orders of magnitude are found for different avalanche properties. We propose a methodology to overcome empirical constraints that limit the range of truncated power-law distributions. By considering catalogs of events that cover different observation windows, the maximum likelihood estimation of a global power-law exponent is computed. This methodology is applied to amplitude and energy distributions of acoustic emission avalanches in failure-under-compression experiments of a nanoporous silica glass, finding in some cases global exponents in an unprecedented broad range: 4.5 decades for amplitudes and 9.5 decades for energies. In the latter case, however, strict statistical analysis suggests experimental limitations might alter the power-law behavior.

  4. Seasonal changes in spatial patterns of two annual plants in the Chihuahuan Desert, USA

    USGS Publications Warehouse

    Yin, Z.-Y.; Guo, Q.; Ren, H.; Peng, S.-L.

    2005-01-01

    Spatial pattern of a biotic population may change over time as its component individuals grow or die out, but whether this is the case for desert annual plants is largely unknown. Here we examined seasonal changes in spatial patterns of two annuals, Eriogonum abertianum and Haplopappus gracilis, in initial (winter) and final (summer) densities. The density was measured as the number of individuals from 384 permanent quadrats (each 0.5 m × 0.5 m) in the Chihuahuan Desert near Portal, Arizona, USA. We used three probability distributions (binomial, Poisson, and negative binomial or NB) that represent three basic spatial patterns (regular, random, and clumped) to fit the observed frequency distributions of densities of the two annuals. Both species showed clear clumped patterns as characterized by the NB and had similar inverse J-shaped frequency distribution curves in two density categories. Also, both species displayed a reduced degree of aggregation from winter to summer after the spring drought (massive die-off), as indicated by the increased k-parameter of the NB and decreased values of another NB parameter p, variance/mean ratio, Lloyd’s Index of Patchiness, and David and Moore’s Index of Clumping. Further, we hypothesized that while the NB (i.e., Poisson-logarithmic) well fits the distribution of individuals per quadrat, its components, the Poisson and logarithmic, may describe the distributions of clumps per quadrat and of individuals per clump, respectively. We thus obtained the means and variances for (1) individuals per quadrat, (2) clumps per quadrat, and (3) individuals per clump. The results showed that the decrease of the density from winter to summer for each plant resulted from the decrease of individuals per clump, rather than from the decrease of clumps per quadrat. The great similarities between the two annuals indicate that our observed temporal changes in spatial patterns may be common among desert annual plants.
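The moment-based clumping diagnostics named in the abstract (variance/mean ratio, the negative-binomial k parameter) are straightforward to compute from quadrat counts; the counts below are invented toy data, not the Chihuahuan Desert quadrats:

```python
from statistics import mean, pvariance

def clumping_indices(counts):
    """Method-of-moments diagnostics for quadrat count data: the
    variance/mean ratio (>1 suggests a clumped pattern) and the
    negative-binomial k parameter, k = m^2 / (s^2 - m); smaller k means
    stronger aggregation, as used in the study."""
    m = mean(counts)
    s2 = pvariance(counts)
    vmr = s2 / m
    k = m * m / (s2 - m) if s2 > m else float("inf")
    return vmr, k

# toy quadrat counts: many empty quadrats plus a few dense clumps
counts = [0] * 20 + [1] * 6 + [2] * 4 + [5] * 2 + [9] * 2
vmr, k = clumping_indices(counts)
```

For these toy counts the variance/mean ratio exceeds 1 and k is small, the clumped signature; the seasonal die-off the paper reports would show up as k increasing between the winter and summer censuses.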

  5. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
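A minimal sketch of the L1-median named in the abstract, via the classical Weiszfeld iteration; the paper's full PDF estimator builds further machinery on top of this location estimate:

```python
import math

def l1_median(points, iters=200, eps=1e-12):
    """Weiszfeld iteration for the L1- (spatial) median: the point
    minimising the sum of Euclidean distances to the samples.  Unlike the
    sample mean it is robust to outliers."""
    x = sum(p[0] for p in points) / len(points)   # start at the mean
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for px, py in points:
            d = math.hypot(px - x, py - y)
            if d < eps:            # iterate sits on a data point; skip it
                continue
            w = 1.0 / d
            num_x += w * px
            num_y += w * py
            den += w
        if den == 0.0:
            break
        x, y = num_x / den, num_y / den
    return x, y

pts = [(0, 0), (1, 0), (0, 1), (1, 1), (100, 100)]  # one gross outlier
mx, my = l1_median(pts)
```

The sample mean of these points is near (20, 20), dragged far off by the single outlier, while the L1-median stays inside the unit square, which is exactly the robustness property the abstract exploits.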

  6. Temperature as a third dimension in column-density mapping of dusty astrophysical structures associated with star formation

    NASA Astrophysics Data System (ADS)

    Marsh, K. A.; Whitworth, A. P.; Lomax, O.

    2015-12-01

    We present point process mapping (PPMAP), a Bayesian procedure that uses images of dust continuum emission at multiple wavelengths to produce resolution-enhanced image cubes of differential column density as a function of dust temperature and position. PPMAP is based on the generic `point process' formalism, whereby the system of interest (in this case, a dusty astrophysical structure such as a filament or pre-stellar core) is represented by a collection of points in a suitably defined state space. It can be applied to a variety of observational data, such as Herschel images, provided only that the image intensity is delivered by optically thin dust in thermal equilibrium. PPMAP takes full account of the instrumental point-spread functions and does not require all images to be degraded to the same resolution. We present the results of testing using simulated data for a pre-stellar core and a fractal turbulent cloud, and demonstrate its performance with real data from the Herschel infrared Galactic Plane Survey (Hi-GAL). Specifically, we analyse observations of a large filamentary structure in the CMa OB1 giant molecular cloud. Histograms of differential column density indicate that the warm material (T ≳ 13 K) is distributed lognormally, consistent with turbulence, but the column densities of the cooler material are distributed as a high-density tail, consistent with the effects of self-gravity. The results illustrate the potential of PPMAP to aid in distinguishing between different physical components along the line of sight in star-forming clouds, and aid the interpretation of the associated probability distribution functions (PDFs) of column density.

  7. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the `randn' function in the MATLAB program. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. 
We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the IMRT plan quality.
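The density-estimation step, a Gaussian-kernel PDF estimate built from a set of NTCP values, can be sketched as follows; the sample values and bandwidth below are illustrative, not clinical data:

```python
import math

def gaussian_kde(samples, h):
    """One-dimensional kernel density estimate with a Gaussian kernel of
    bandwidth h (the 'window width' of the abstract).  Returns a callable
    pdf(x) that averages a normal bump centred on each sample."""
    n = len(samples)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return c * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return pdf

# toy NTCP values clustered around 0.2 (illustrative only)
ntcp = [0.15, 0.18, 0.20, 0.21, 0.22, 0.25, 0.30]
pdf = gaussian_kde(ntcp, h=0.03)
```

The resulting PDF peaks where the sample values cluster and integrates to one; widening h smooths the estimate, matching the abstract's observation that smoothness increases with window width.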

  8. Pollinator communities in strawberry crops - variation at multiple spatial scales.

    PubMed

    Ahrenfeldt, E J; Klatt, B K; Arildsen, J; Trandem, N; Andersson, G K S; Tscharntke, T; Smith, H G; Sigsgaard, L

    2015-08-01

    Predicting potential pollination services of wild bees in crops requires knowledge of their spatial distribution within fields. Field margins can serve as nesting and foraging habitats for wild bees and can be a source of pollinators. Regional differences in pollinator community composition may affect this spill-over of bees. We studied how regional and local differences affect the spatial distribution of wild bee species richness, activity-density and body size in crop fields. We sampled bees both from the field centre and at two different types of semi-natural field margins, grass strips and hedges, in 12 strawberry fields. The fields were distributed over four regions in Northern Europe, representing an almost 1100 km long north-south gradient. Even over this gradient, daytime temperatures during sampling did not differ significantly between regions and did therefore probably not impact bee activity. Bee species richness was higher in field margins compared with field centres independent of field size. However, there was no difference between centre and margin in body size or activity-density. In contrast, bee activity-density increased towards the southern regions, whereas the mean body size increased towards the north. In conclusion, our study revealed a general pattern across European regions of bee diversity, but not activity-density, declining towards the field interior, which suggests that the benefits of functional diversity of pollinators may be difficult to achieve through spill-over effects from margins to crop. We also identified dissimilar regional patterns in bee diversity and activity-density, which should be taken into account in conservation management.

  9. Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation

    NASA Technical Reports Server (NTRS)

    Steinwolf, Alexander; Rizzi, Stephen A.

    2003-01-01

    The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.
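Departures from the Gaussian model of the kind the abstract describes (wider, longer tails; asymmetry near step discontinuities) are conventionally quantified by sample skewness and excess kurtosis, both zero for a Gaussian. A sketch on synthetic heavy-tailed data (a product of two independent normals, standing in for real fluctuating-pressure records):

```python
import math
import random

def skew_kurt(xs):
    """Sample skewness and excess kurtosis from central moments.
    A Gaussian gives (0, 0); heavy tails raise the kurtosis and
    asymmetry shifts the skewness away from zero."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

# product of two independent standard normals: symmetric, but with
# theoretical excess kurtosis 6 -- far beyond the Gaussian's 0
rng = random.Random(3)
heavy = [rng.gauss(0.0, 1.0) * rng.gauss(0.0, 1.0) for _ in range(100000)]
s, k = skew_kurt(heavy)
```

A near-zero skewness with strongly positive excess kurtosis is the signature of the symmetric wide-tailed PDFs measured away from surface discontinuities; the asymmetric step-adjacent PDFs would additionally show nonzero skewness.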

  10. Poincaré recurrence statistics as an indicator of chaos synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boev, Yaroslav I., E-mail: boev.yaroslav@gmail.com; Vadivasova, Tatiana E., E-mail: vadivasovate@yandex.ru; Anishchenko, Vadim S., E-mail: wadim@info.sgu.ru

    The dynamics of the autonomous and non-autonomous Rössler system is studied using the Poincaré recurrence time statistics. It is shown that the probability density of Poincaré recurrence times is a set of equidistant peaks whose spacing equals the oscillation period and whose envelope obeys an exponential distribution. The dimension of the spatially uniform Rössler attractor is estimated using Poincaré recurrence times. The mean Poincaré recurrence time in the non-autonomous Rössler system is locked by the external frequency, and this enables us to detect the effect of phase-frequency synchronization.
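Recurrence-time statistics of this kind can be illustrated with any ergodic system; below, the fully chaotic logistic map stands in for the Rössler flow (an assumption for illustration only), and Kac's lemma predicts the mean recurrence time as the reciprocal of the cell's invariant measure:

```python
def recurrence_times(orbit, lo, hi):
    """Times between successive visits of an orbit to the cell [lo, hi).
    By Kac's lemma, for an ergodic map the mean recurrence time equals
    the reciprocal of the cell's invariant measure."""
    times, last = [], None
    for t, x in enumerate(orbit):
        if lo <= x < hi:
            if last is not None:
                times.append(t - last)
            last = t
    return times

# fully chaotic logistic map x -> 4x(1-x) as a toy ergodic system
x, orbit = 0.1234, []
for _ in range(200000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)

times = recurrence_times(orbit, 0.4, 0.6)
mean_rt = sum(times) / len(times)
```

For the cell [0.4, 0.6) the invariant density 1/(π√(x(1−x))) gives measure 2·arcsin(0.2)/π ≈ 0.128, so the mean recurrence time comes out near 7.8 iterations; a histogram of `times` would show the distribution whose envelope the abstract characterises for the Rössler system.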

  11. A Probabilistic, Facility-Centric Approach to Lightning Strike Location

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2012-01-01

    A new probabilistic facility-centric approach to lightning strike location has been developed. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
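The core computation, integrating a bivariate Gaussian error ellipse over a circle that need not be centred on the ellipse, can be sketched by Monte Carlo; the operational technique presumably uses a deterministic integration, and every number below is illustrative:

```python
import math
import random

def strike_probability(center, sigma_x, sigma_y, theta,
                       facility, radius, n=200000, seed=2):
    """Monte Carlo estimate of the probability that a stroke described by a
    bivariate Gaussian error ellipse (centre, semi-axis sigmas, orientation
    theta) falls within `radius` of a facility that need not sit at the
    ellipse centre."""
    rng = random.Random(seed)
    cx, cy = center
    fx, fy = facility
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    hits = 0
    for _ in range(n):
        u = rng.gauss(0.0, sigma_x)          # sample in ellipse axes
        v = rng.gauss(0.0, sigma_y)
        x = cx + u * cos_t - v * sin_t       # rotate into map coordinates
        y = cy + u * sin_t + v * cos_t
        if (x - fx) ** 2 + (y - fy) ** 2 <= radius ** 2:
            hits += 1
    return hits / n

# ellipse at the origin, facility offset half a sigma away (toy numbers)
p = strike_probability((0.0, 0.0), 1.0, 0.5, 0.0, (0.5, 0.0), 1.0)
```

The estimate behaves as the risk calculation requires: it shrinks as the facility moves away from the ellipse and grows with the radius of concern.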

  12. A Bayesian Approach to Magnetic Moment Determination Using μSR

    NASA Astrophysics Data System (ADS)

    Blundell, S. J.; Steele, A. J.; Lancaster, T.; Wright, J. D.; Pratt, F. L.

    A significant challenge in zero-field μSR experiments arises from the uncertainty in the muon site. It is possible to calculate the dipole field (and hence precession frequency v) at any particular site given the magnetic moment μ and magnetic structure. One can also evaluate f(v), the probability distribution function of v assuming that the muon site can be anywhere within the unit cell with equal probability, excluding physically forbidden sites. Since v is obtained from experiment, what we would like to know is g(μ|v), the probability density function of μ given the observed v. This can be obtained from our calculated f(v|μ) using Bayes' theorem. We describe an approach to this problem which we have used to extract information about real systems including a low-moment osmate compound, a family of molecular magnets, and an iron-arsenide compound.
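On a grid of candidate moments, the Bayes inversion described here reduces to weighting a likelihood by a prior and normalising. The linear forward model and noise width below are toy stand-ins for the dipole-field calculation, not the paper's physics:

```python
import math

def posterior_moment(likelihood, prior, mus, nu_obs):
    """Grid version of g(mu | nu) ∝ f(nu | mu) * prior(mu).
    `likelihood(nu, mu)` plays the role of f(nu|mu); here it will be a
    toy Gaussian around a linear forward model."""
    w = [likelihood(nu_obs, mu) * prior(mu) for mu in mus]
    z = sum(w)
    return [wi / z for wi in w]

mus = [0.1 * i for i in range(1, 31)]          # candidate moments (mu_B)
lik = lambda nu, mu: math.exp(-0.5 * ((nu - 2.0 * mu) / 0.3) ** 2)
post = posterior_moment(lik, lambda mu: 1.0, mus, nu_obs=3.0)
```

With a flat prior and the toy forward model v = 2μ, an observed frequency of 3.0 yields a posterior peaked at μ ≈ 1.5, and the posterior's width directly expresses the muon-site uncertainty the abstract highlights.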

  13. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  14. Identification of cloud fields by the nonparametric algorithm of pattern recognition from normalized video data recorded with the AVHRR instrument

    NASA Astrophysics Data System (ADS)

    Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.

    2002-02-01

    The problem of cloud field recognition from the NOAA satellite data is urgent for solving not only meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used the AVHRR/NOAA video data that regularly displayed the situation in the territory. The complexity and extremely nonstationary character of problems to be solved call for the use of information of all spectral channels, mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For a class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and the information content of the synthesized decision rules. In this case, to solve efficiently the problem of identifying cloud field types, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem of nonparametric reconstruction of distributions from the learning samples arises. To this end, we used nonparametric estimates of distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was substituted by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, a cloudiness type was identified using the Bayes decision rule.

  15. Visualization of spatial patterns and temporal trends for aerial surveillance of illegal oil discharges in western Canadian marine waters.

    PubMed

    Serra-Sogas, Norma; O'Hara, Patrick D; Canessa, Rosaline; Keller, Peter; Pelot, Ronald

    2008-05-01

    This paper examines the use of exploratory spatial analysis for identifying hotspots of shipping-based oil pollution in the Pacific Region of Canada's Exclusive Economic Zone. It makes use of data collected from fiscal years 1997/1998 to 2005/2006 by the National Aerial Surveillance Program, the primary tool for monitoring and enforcing the provisions imposed by MARPOL 73/78. First, we present oil spill data as points in a "dot map" relative to coastlines, harbors and the aerial surveillance distribution. Then, we explore the intensity of oil spill events using the Quadrat Count method, and the Kernel Density Estimation methods with both fixed and adaptive bandwidths. We found that oil spill hotspots were more clearly defined using Kernel Density Estimation with an adaptive bandwidth, probably because of the "clustered" distribution of oil spill occurrences. Finally, we discuss the importance of standardizing oil spill data by controlling for surveillance effort to provide a better understanding of the distribution of illegal oil spills, and how these results can ultimately benefit a monitoring program.

  16. Uncertainty squared: Choosing among multiple input probability distributions and interpreting multiple output probability distributions in Monte Carlo climate risk models

    NASA Astrophysics Data System (ADS)

    Baer, P.; Mastrandrea, M.

    2006-12-01

    Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question, what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. 
We suggest further that where there is insufficient evidence to clearly favor one range of probabilistic projections over another, the choice of results on which to base policy must necessarily involve ethical considerations, as they have inevitable consequences for the distribution of risk. In particular, the choice to use a more "optimistic" PDF for climate sensitivity (or other components of the causal chain) leads to the allowance of higher emissions consistent with any specified goal for risk reduction, and thus leads to higher climate impacts, in exchange for lower mitigation costs.

  17. Work statistics of charged noninteracting fermions in slowly changing magnetic fields.

    PubMed

    Yi, Juyeon; Talkner, Peter

    2011-04-01

    We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β^{-1} and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β^{2}. At low temperatures the pdf becomes strongly peaked at the center with a variance that still linearly increases with N but exponentially decreases with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality such as the low probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes. ©2011 American Physical Society

  18. Work statistics of charged noninteracting fermions in slowly changing magnetic fields

    NASA Astrophysics Data System (ADS)

    Yi, Juyeon; Talkner, Peter

    2011-04-01

    We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β-1 and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β2. At low temperatures the pdf becomes strongly peaked at the center with a variance that still linearly increases with N but exponentially decreases with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality such as the low probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes.

  19. A Bayesian approach to microwave precipitation profile retrieval

    NASA Technical Reports Server (NTRS)

    Evans, K. Franklin; Turk, Joseph; Wong, Takmeng; Stephens, Graeme L.

    1995-01-01

    A multichannel passive microwave precipitation retrieval algorithm is developed. Bayes theorem is used to combine statistical information from numerical cloud models with forward radiative transfer modeling. A multivariate lognormal prior probability distribution contains the covariance information about hydrometeor distribution that resolves the nonuniqueness inherent in the inversion process. Hydrometeor profiles are retrieved by maximizing the posterior probability density for each vector of observations. The hydrometeor profile retrieval method is tested with data from the Advanced Microwave Precipitation Radiometer (10, 19, 37, and 85 GHz) of convection over ocean and land in Florida. The CP-2 multiparameter radar data are used to verify the retrieved profiles. The results show that the method can retrieve approximate hydrometeor profiles, with larger errors over land than water. There is considerably greater accuracy in the retrieval of integrated hydrometeor contents than of profiles. Many of the retrieval errors are traced to problems with the cloud model microphysical information, and future improvements to the algorithm are suggested.

  20. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
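For p = 1 the maximizer among unconstrained continuous distributions is the Laplace density; a quick check that it beats the Gaussian of equal first absolute moment, using standard closed-form entropies (textbook identities, not formulas taken from this report):

```python
import math

def gaussian_entropy_for_l1(m1):
    """Differential entropy of the zero-mean Gaussian with E|X| = m1.
    Uses E|X| = sigma * sqrt(2/pi) and h = 0.5 * ln(2*pi*e*sigma^2)."""
    sigma = m1 * math.sqrt(math.pi / 2.0)
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

def laplace_entropy_for_l1(m1):
    """Differential entropy of the zero-mean Laplace with E|X| = m1 = b.
    For density (1/2b) exp(-|x|/b), h = 1 + ln(2b)."""
    return 1.0 + math.log(2.0 * m1)

h_gauss = gaussian_entropy_for_l1(1.0)
h_laplace = laplace_entropy_for_l1(1.0)
```

The Laplace entropy 1 + ln 2 ≈ 1.693 exceeds the Gaussian's ln π + 1/2 ≈ 1.645 at equal L1 norm, consistent with the tabulated result; it is also linear in ln m1, illustrating the straight-line relationship the abstract notes.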
