Sample records for randomly distributed points

  1. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    NASA Astrophysics Data System (ADS)

    Li, C.

    2012-07-01

    POS (Position and Orientation System), integrating GPS and INS (Inertial Navigation Systems), allows rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected; thus the traditional calibration approach based on three orthogonal vanishing points is challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and a method for setting the initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points and their error distributions are estimated using an iterative method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for this random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
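
    One way to realize the random statistical simulation sketched above, assuming (as is standard for square pixels and zero skew) that the principal point is the orthocenter of the triangle of the three orthogonal vanishing points: sample VX and VY from their error ellipses and propagate each draw to VZ. All numeric values below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def third_vp(vx, vy, pp):
    """VZ placed so that the principal point pp is the orthocenter of the
    triangle (VX, VY, VZ), the standard relation for orthogonal vanishing
    points with square pixels and zero skew."""
    d = vy - vx
    n = np.array([-d[1], d[0]])              # altitude through pp, normal to VX-VY
    t = ((vy - pp) @ (vx - pp)) / (n @ (vx - pp))   # (VZ-VY) orthogonal to (VX-pp)
    return pp + t * n

rng = np.random.default_rng(0)
pp = np.array([512.0, 384.0])                # hypothetical principal point (pixels)
mean_vx, cov_vx = np.array([2400.0, 380.0]), np.diag([40.0, 25.0])   # error ellipse
mean_vy, cov_vy = np.array([-1800.0, 400.0]), np.diag([55.0, 30.0])  # error ellipse

vz = np.array([third_vp(rng.multivariate_normal(mean_vx, cov_vx),
                        rng.multivariate_normal(mean_vy, cov_vy), pp)
               for _ in range(10000)])
print("VZ mean:", vz.mean(axis=0))
print("VZ covariance (feeds the error-ellipse evaluation):\n", np.cov(vz.T))
```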

  2. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance r_n between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
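
    For uniform (Poisson) points of density ρ in D dimensions, the exact result such heuristics approximate is ⟨r_n⟩ = [Γ(n + 1/D)/Γ(n)] (c_D ρ)^(-1/D), where c_D = π^(D/2)/Γ(D/2 + 1) is the volume of the unit D-ball. A minimal numpy check in D = 2 (parameter values arbitrary):

```python
import numpy as np
from scipy.special import gamma

def mean_rn_theory(n, rho, D):
    """Poisson-process result: <r_n> = Gamma(n+1/D)/Gamma(n) * (c_D rho)^(-1/D)."""
    c_D = np.pi ** (D / 2) / gamma(D / 2 + 1)   # volume of the unit D-ball
    return gamma(n + 1 / D) / gamma(n) * (c_D * rho) ** (-1 / D)

rng = np.random.default_rng(1)
D, rho, n = 2, 1000, 3
centre = np.array([0.5, 0.5])                   # reference point away from edges
est = np.mean([np.sort(np.linalg.norm(rng.random((rho, D)) - centre, axis=1))[n - 1]
               for _ in range(2000)])
print(est, mean_rn_theory(n, rho, D))           # both ~= 0.0297
```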

  3. Probability distribution of the entanglement across a cut at an infinite-randomness fixed point

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Majumdar, Satya N.; Huse, David A.

    2017-03-01

    We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ~ L^(-ψ(k)), where k ≡ S/ln[L/L_0], the large-deviation function ψ(k) is found explicitly, and L_0 is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model, which we calculate for large L via a mapping to Majorana fermions.

  4. Paretian Poisson Processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.

  5. A random wave model for the Aharonov-Bohm effect

    NASA Astrophysics Data System (ADS)

    Houston, Alexander J. H.; Gradhand, Martin; Dennis, Mark R.

    2017-05-01

    We study an ensemble of random waves subject to the Aharonov-Bohm effect. The introduction of a point with a magnetic flux of arbitrary strength into a random wave ensemble gives a family of wavefunctions whose distribution of vortices (complex zeros) is responsible for the topological phase associated with the Aharonov-Bohm effect. Analytical expressions are found for the vortex number and topological charge densities as functions of distance from the flux point. Comparison is made with the distribution of vortices in the isotropic random wave model. The results indicate that as the flux approaches half-integer values, a vortex with the same sign as the fractional part of the flux is attracted to the flux point, merging with it in the limit of half-integer flux. We construct a statistical model of the neighbourhood of the flux point to study how this vortex-flux merger occurs in more detail. Other features of the Aharonov-Bohm vortex distribution are also explored.

  6. The correlation function for density perturbations in an expanding universe. III - The three-point correlation function and predictions of the four-point and higher order correlation functions

    NASA Technical Reports Server (NTRS)

    McClelland, J.; Silk, J.

    1978-01-01

    Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.

  7. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach, in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and the central point. The scission process is started by smashing the compound-nucleus central point into two parts, a left central point and a right central point. These three points have different Gaussian distribution parameters, namely means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated with randomly changed σL and σR.
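
    The abstract does not spell out the trapping rule, so the sketch below adopts one plausible reading: a nucleon is trapped by whichever fragment's Gaussian kernel is larger at its position, and the two central points are re-estimated until (N_L, N_R) stop changing. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
A = 236                                    # hypothetical nucleon number
x = rng.normal(0.0, 6.0, size=A)           # nucleon positions from the CN Gaussian

mu_l, mu_r = -3.0, 3.0                     # initial left/right central points
sig_l, sig_r = rng.uniform(2, 5, size=2)   # randomized fragment widths

def kernel(x, mu, sig):                    # Gaussian weight of a central point
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / sig

n_l = -1
for _ in range(100):                       # iterate until (N_L, N_R) are constant
    left = kernel(x, mu_l, sig_l) > kernel(x, mu_r, sig_r)
    if left.sum() == n_l:
        break
    n_l = int(left.sum())
    mu_l, mu_r = x[left].mean(), x[~left].mean()

print("N_L, N_R =", n_l, A - n_l)          # one sampled mass split
```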

  8. Analysis of a Spatial Point Pattern: Examining the Damage to Pavement and Pipes in Santa Clara Valley Resulting from the Loma Prieta Earthquake

    USGS Publications Warehouse

    Phelps, G.A.

    2008-01-01

    This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientations of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
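
    A toy version of the orientation analysis, under stated assumptions: the study area is a made-up polygon, the "damage" points are random stand-ins (so the observed histogram should stay inside the null envelope), and the null hypothesis of randomly located points is approximated numerically by rejection sampling, as the report describes.

```python
import numpy as np

def in_polygon(p, poly):
    """Ray-casting point-in-polygon test; poly is an (m, 2) vertex array."""
    x, y = p
    j, inside = len(poly) - 1, False
    for i in range(len(poly)):
        (xi, yi), (xj, yj) = poly[i], poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def random_in_polygon(n, poly, rng):
    """Numerical null model: random points in the irregular study area."""
    lo, hi, out = poly.min(axis=0), poly.max(axis=0), []
    while len(out) < n:                       # rejection sampling in the bounding box
        p = lo + rng.random(2) * (hi - lo)
        if in_polygon(p, poly):
            out.append(p)
    return np.array(out)

def pair_orientations(pts):
    """Orientations (mod 180 deg) of all vectors defined by pairs of points."""
    d = pts[:, None, :] - pts[None, :, :]
    iu = np.triu_indices(len(pts), k=1)
    return np.degrees(np.arctan2(d[..., 1], d[..., 0]))[iu] % 180.0

rng = np.random.default_rng(3)
poly = np.array([[0, 0], [10, 0], [12, 5], [6, 9], [-1, 4]], float)
damage = random_in_polygon(80, poly, rng)     # stand-in for observed damage points
null = [np.histogram(pair_orientations(random_in_polygon(80, poly, rng)),
                     bins=18, range=(0, 180))[0] for _ in range(200)]
obs = np.histogram(pair_orientations(damage), bins=18, range=(0, 180))[0]
lo95, hi95 = np.percentile(null, [2.5, 97.5], axis=0)
print("orientation bins outside the null envelope:",
      np.where((obs < lo95) | (obs > hi95))[0])
```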

  9. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    PubMed

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
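
    A minimal sketch of the systematic random sampling idea RandomSpot implements (not its actual code): an equidistant grid given a single uniform random offset, then clipped to the region of interest. The mask, spacing, and sizes are hypothetical.

```python
import numpy as np

def srs_points(roi_mask, spacing, rng):
    """Systematic random sampling: an equidistant grid with one uniform
    random offset, clipped to a region of interest given as a boolean mask."""
    h, w = roi_mask.shape
    ox, oy = rng.random(2) * spacing              # random phase of the grid
    xs = np.arange(ox, w, spacing)
    ys = np.arange(oy, h, spacing)
    pts = [(x, y) for y in ys for x in xs
           if roi_mask[int(y), int(x)]]           # keep points falling on tissue
    return np.array(pts)

rng = np.random.default_rng(4)
mask = np.zeros((1000, 1500), bool)
mask[200:800, 300:1200] = True                    # hypothetical region of interest
pts = srs_points(mask, spacing=75.0, rng=rng)
print(len(pts), "sample points; first few:\n", pts[:3])
```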

  10. Randomness versus specifics for word-frequency distributions

    NASA Astrophysics Data System (ADS)

    Yan, Xiaoyong; Minnhagen, Petter

    2016-02-01

    The text-length dependence of real word-frequency distributions can be connected to the general properties of a random book. It is pointed out that this finding has strong implications when deciding between two conceptually different views on word-frequency distributions, the specific 'Zipf's view' and the non-specific 'randomness view', as is discussed. It is also noticed that the text-length transformation of a random book has an exact scaling property precisely for the power-law index γ = 1, as opposed to the Zipf exponent γ = 2, and the implication of this exact scaling property is discussed. However, a real text has γ > 1, and as a consequence γ increases when a real text is shortened. The connections to the predictions from RGF (Random Group Formation) and to the infinite-length limit of a meta-book are also discussed. The difference between 'curve-fitting' and 'predicting' word-frequency distributions is stressed. It is pointed out that the question of randomness versus specifics for the distribution of outcomes in sufficiently complex systems has a much wider relevance than just the word-frequency example analyzed in the present work.

  11. Models for the hotspot distribution

    NASA Technical Reports Server (NTRS)

    Jurdy, Donna M.; Stefanick, Michael

    1990-01-01

    Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.
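
    The area estimation step is easy to reproduce: draw uniform random points on the sphere and count the fraction within a given angular distance of an idealized boundary. For a single great-circle "ridge" the analytic answer is sin δ, which the sketch below recovers; the real test would substitute digitized plate boundaries.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200000
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)     # uniform random points on the sphere

# Idealize one "ridge" as the great circle in the z = 0 plane; a point is
# within 10 degrees of it iff its latitude is below 10 degrees.
lat = np.degrees(np.arcsin(np.abs(v[:, 2])))
frac = np.mean(lat <= 10.0)
print(frac, "vs analytic", np.sin(np.radians(10.0)))   # both ~0.174
```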

  12. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or 'nodal' surface f(x, y, z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface, it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous ('stationary') and isotropic zero-mean Gaussian random function f. Capitalizing on the isotropy, a 'fixer' device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  13. Urban tree cover change in Detroit and Atlanta, USA, 1951-2010

    Treesearch

    Krista Merry; Jacek Siry; Pete Bettinger; J.M. Bowker

    2014-01-01

    We assessed tree cover using random points and polygons distributed within the administrative boundaries of Detroit, MI and Atlanta, GA. Two approaches were tested: a point-based approach using 1000 randomly located sample points, and a polygon-based approach using 250 circular areas, 200 m in radius (12.56 ha). In the case of Atlanta, both approaches arrived at similar...

  14. An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses

    NASA Technical Reports Server (NTRS)

    Lee, Man Hoi; Spergel, David N.

    1990-01-01

    The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.

  15. Learning stochastic reward distributions in a speeded pointing task.

    PubMed

    Seydell, Anna; McCann, Brian C; Trommershäuser, Julia; Knill, David C

    2008-04-23

    Recent studies have shown that humans effectively take into account task variance caused by intrinsic motor noise when planning fast hand movements. However, previous evidence suggests that humans have greater difficulty accounting for arbitrary forms of stochasticity in their environment, both in economic decision making and sensorimotor tasks. We hypothesized that humans can learn to optimize movement strategies when environmental randomness can be experienced and thus implicitly learned over several trials, especially if it mimics the kinds of randomness for which subjects might have generative models. We tested the hypothesis using a task in which subjects had to rapidly point at a target region partly covered by three stochastic penalty regions introduced as "defenders." At movement completion, each defender jumped to a new position drawn randomly from fixed probability distributions. Subjects earned points when they hit the target, unblocked by a defender, and lost points otherwise. Results indicate that after approximately 600 trials, subjects approached optimal behavior. We further tested whether subjects simply learned a set of stimulus-contingent motor plans or the statistics of defenders' movements by training subjects with one penalty distribution and then testing them on a new penalty distribution. Subjects immediately changed their strategy to achieve the same average reward as subjects who had trained with the second penalty distribution. These results indicate that subjects learned the parameters of the defenders' jump distributions and used this knowledge to optimally plan their hand movements under conditions involving stochastic rewards and penalties.

  16. Tree species exhibit complex patterns of distribution in bottomland hardwood forests

    Treesearch

    Luben D Dimov; Jim L Chambers; Brian R. Lockhart

    2013-01-01

    Context: Understanding tree interactions requires an insight into their spatial distribution. Aims: We looked for the presence and extent of intraspecific spatial point patterns (random, aggregated, or overdispersed) and interspecific spatial point patterns (independent, aggregated, or segregated). Methods: We established twelve 0.64-ha plots in natural...

  17. Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Chen; Sichitiu, Mihail L.

    Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous theoretical analyses of the contact time distribution for random walk (RW) models assume that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, and thus reach two different conclusions. In this paper we conduct a comprehensive study of this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases yield a power-law or an exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to the random waypoint model.

  18. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule of going to the nearest point which has not been visited in the preceding μ steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S^{(N)}_{μ,d}(t, p) is the relevant quantity, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S^{(∞)}_{1,d}(t, p) = [Γ(1 + I_d^{-1}) Γ(t + I_d^{-1}) / Γ(t + p + I_d^{-1})] δ_{p,2}, where t = 0, 1, 2, …, ∞, Γ(z) is the gamma function and δ_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d → ∞, and the random map model which, even for μ = 0, presents a nontrivial cycle distribution [S^{(N)}_{0,rm}(p) ∝ p^{-1}]: S^{(N)}_{0,rm}(t, p) = Γ(N) / {Γ[N + 1 - (t + p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly, and they have been validated by numerical experiments.
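
    A direct simulation of the memoryless (μ = 1) walk makes the δ_{p,2} factor concrete: every attractor is a pair of mutually nearest neighbours, so p = 2 for all starting points. A sketch with arbitrary N and d:

```python
import numpy as np

def tourist_walk(d, start, mu=1):
    """Deterministic tourist walk on a precomputed distance matrix: move to the
    nearest point not visited in the preceding mu steps (mu = 1, the memoryless
    case, forbids only the current site). Detecting a cycle via a repeated site
    is exact for mu = 1, where the state is just the current site."""
    path, seen = [start], {start: 0}
    while True:
        recent = set(path[-mu:])
        nxt = next(int(j) for j in np.argsort(d[path[-1]]) if int(j) not in recent)
        if nxt in seen:
            return seen[nxt], len(path) - seen[nxt]    # transient t, period p
        seen[nxt] = len(path)
        path.append(nxt)

rng = np.random.default_rng(6)
N, dim = 500, 2
pts = rng.random((N, dim))              # uniform random points in the unit hypercube
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tp = np.array([tourist_walk(d, s) for s in range(N)])
print("all attractors have period p = 2:", bool((tp[:, 1] == 2).all()))
print("mean transient length t:", tp[:, 0].mean())
```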

  19. A heuristic for the distribution of point counts for random curves over a finite field.

    PubMed

    Achter, Jeffrey D; Erman, Daniel; Kedlaya, Kiran S; Wood, Melanie Matchett; Zureick-Brown, David

    2015-04-28

    How many rational points are there on a random algebraic curve of large genus g over a given finite field F_q? We propose a heuristic for this question motivated by a (now proven) conjecture of Mumford on the cohomology of moduli spaces of curves; this heuristic suggests a Poisson distribution with mean q + 1 + 1/(q - 1). We prove a weaker version of this statement in which g and q tend to infinity, with q much larger than g.

  20. Wigner surmises and the two-dimensional homogeneous Poisson point process.

    PubMed

    Sakhr, Jamal; Nieminen, John M

    2006-04-01

    We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 × 2 complex non-Hermitian random matrices.
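
    The flavor of these identities is easy to check numerically: after normalizing to unit mean, the nearest-neighbour spacing density of the 2D homogeneous Poisson process is (π/2) s exp(-π s²/4), the GOE Wigner surmise. A sketch with periodic boundaries to suppress edge effects:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
n = 20000
pts = rng.random((n, 2))                 # homogeneous Poisson sample on the unit torus
tree = cKDTree(pts, boxsize=1.0)         # periodic boundaries avoid edge effects
d, _ = tree.query(pts, k=2)              # k=1 is the point itself
s = d[:, 1] / d[:, 1].mean()             # nearest-neighbour spacings, unit mean

# 2D Poisson NN spacing coincides with the GOE Wigner surmise
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
surmise = 0.5 * np.pi * mid * np.exp(-0.25 * np.pi * mid ** 2)
print("max |empirical - surmise| =", float(np.abs(hist - surmise).max()))
```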

  21. Break Point Distribution on Chromosome 3 of Human Epithelial Cells exposed to Gamma Rays, Neutrons and Fe Ions

    NASA Technical Reports Server (NTRS)

    Hada, M.; Saganti, P. B.; Gersey, B.; Wilkins, R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

    Most of the reported studies of break point distributions on chromosomes damaged by radiation exposure were carried out with the G-banding technique or were determined from the relative lengths of the broken chromosomal fragments. However, these techniques lack accuracy in comparison with the more recently developed multicolor banding in situ hybridization (mBAND) technique, which is generally used for the analysis of intrachromosomal aberrations such as inversions. Using mBAND, we studied chromosome aberrations in human epithelial cells exposed in vitro to low or high dose rate gamma rays in Houston, low dose rate secondary neutrons at Los Alamos National Laboratory, and high dose rate 600 MeV/u Fe ions at NASA Space Radiation Laboratory. Detailed analysis of the inversion types revealed that all three radiation types induced a low incidence of simple inversions. Half of the inversions observed after neutron or Fe ion exposure, and the majority of inversions in gamma-irradiated samples, were accompanied by other types of intrachromosomal aberrations. In addition, neutrons and Fe ions induced a significant fraction of inversions that involved complex rearrangements of both inter- and intrachromosomal exchanges. We further compared the distribution of break points on chromosome 3 for the three radiation types. The break points were found to be randomly distributed on chromosome 3 after neutron or Fe ion exposure, whereas a non-random distribution with clustered break points was observed for gamma rays. The break point distribution may serve as a potential fingerprint of high-LET radiation exposure.

  22. Open quantum random walk in terms of quantum Bernoulli noise

    NASA Astrophysics Data System (ADS)

    Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling

    2018-03-01

    In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.

  23. Distribution of G concurrence of random pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2006-12-15

    The average entanglement of random pure states of an N × N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N → ∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N × K bipartite system, in the joint limit N, K → ∞ with K/N fixed.

  24. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
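
    A self-contained illustration of the comparison, with a hand-rolled Halton sequence plus a Cranley-Patterson random shift standing in for "an ensemble of quasi-random point sets" (the paper's ensemble definition differs; the integrand and all parameters here are arbitrary):

```python
import numpy as np

def halton(n, dims):
    """Radical-inverse (Halton) points in the first `dims` prime bases."""
    out = np.empty((n, dims))
    for j, b in enumerate([2, 3, 5, 7, 11, 13][:dims]):
        k, v, w = np.arange(1, n + 1), np.zeros(n), 1.0 / b
        while k.max() > 0:
            v += (k % b) * w
            k //= b
            w /= b
        out[:, j] = v
    return out

f = lambda x: np.prod(1.0 + 0.5 * np.sin(2 * np.pi * x), axis=1)  # exact integral = 1
rng = np.random.default_rng(8)
n, dims, trials = 1024, 2, 400
base = halton(n, dims)
mc_err, qmc_err = [], []
for _ in range(trials):
    mc_err.append(f(rng.random((n, dims))).mean() - 1.0)
    qmc_err.append(f((base + rng.random(dims)) % 1.0).mean() - 1.0)  # random shift
print("MC  error spread:", np.std(mc_err))
print("QMC error spread:", np.std(qmc_err))   # markedly narrower distribution
```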

  25. Explore Stochastic Instabilities of Periodic Points by Transition Path Theory

    NASA Astrophysics Data System (ADS)

    Cao, Yu; Lin, Ling; Zhou, Xiang

    2016-06-01

    We consider noise-induced transitions from a linearly stable periodic orbit consisting of T periodic points in a randomly perturbed discrete logistic map. Traditional large deviation theory and asymptotic analysis in the small-noise limit cannot distinguish the quantitative differences in noise-induced stochastic instability among the T periodic points. To attack this problem, we generalize transition path theory to discrete-time, continuous-space stochastic processes. In our first criterion for quantifying the relative instability of the T periodic points, we use the distribution of the last passage location for transitions from the whole periodic orbit to a prescribed disjoint set; this distribution is related to the individual contributions of each periodic point to the transition rate. The second criterion is based on the competency of the transition paths associated with each periodic point. Both criteria utilize the reactive probability current of transition path theory. Our numerical results for the logistic map reveal the transition mechanism of escape from the stable periodic orbit and identify which periodic point is more prone to lose stability so as to make successful transitions under random perturbations.

  26. Random crystal field effects on the integer and half-integer mixed-spin system

    NASA Astrophysics Data System (ADS)

    Yigit, Ali; Albayrak, Erhan

    2018-05-01

    In this work, we have focused on the random crystal field effects on the phase diagrams of the mixed spin-1 and spin-5/2 Ising system, obtained by utilizing the exact recursion relations (ERR) on the Bethe lattice (BL). The distribution function P(D_i) = p δ[D_i - D(1 + α)] + (1 - p) δ[D_i - D(1 - α)] is used to randomize the crystal field. The phase diagrams are found to exhibit second- and first-order phase transitions depending on the values of α, D and p. It is also observed that the model displays a tricritical point, an isolated point, a critical end point and three compensation temperatures for suitable values of the system parameters.

  27. Gradients estimation from random points with volumetric tensor in turbulence

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by a geometric distribution of the points. The coarse grained gradient can be considered as a low pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in incompressible planar jet and mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of the points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse grained gradient changes with the cutoff length, the volumetric tensor approximation yields the coarse grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
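
    A minimal least-squares reading of the construction (a sketch, not the authors' code): with offsets δx_i from the evaluation point and value differences δf_i, solve (Σ_i δx_i δx_iᵀ) g = Σ_i δx_i δf_i, where the 3 × 3 matrix Σ δx δxᵀ is the volumetric tensor determined by the geometry of the points. The test field and point counts are arbitrary.

```python
import numpy as np

def gradient_from_points(x0, pts, f0, f):
    """Least-squares gradient at x0 from scattered neighbours: solve
    (sum dx dx^T) g = sum dx df, with the 3x3 volumetric tensor sum dx dx^T
    characterizing the geometric distribution of the points."""
    dx = pts - x0                       # (n, 3) offsets
    df = f - f0                         # (n,) value differences
    G = dx.T @ dx                       # volumetric tensor (3x3)
    return np.linalg.solve(G, dx.T @ df)

rng = np.random.default_rng(9)
x0 = np.array([0.2, -0.1, 0.4])
true_grad = np.array([1.5, -2.0, 0.7])
func = lambda p: p @ true_grad + 0.3 * np.sin(p[..., 0])   # smooth test field

pts = x0 + 0.01 * rng.normal(size=(8, 3))   # 8 randomly distributed neighbours
g = gradient_from_points(x0, pts, func(x0), func(pts))
print(g, "vs exact:", true_grad + np.array([0.3 * np.cos(x0[0]), 0.0, 0.0]))
```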

  28. Positivity, discontinuity, finite resources, and nonzero error for arbitrarily varying quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de; Nötzel, J., E-mail: janis.noetzel@tum.de

    2014-12-15

    This work is motivated by a quite general question: Under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on finite arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers the recent question whether the transmission of messages over AVQCs can benefit from assistance by distribution of randomness between the legitimate sender and receiver in the affirmative. The specific class of channels introduced in that example is then extended to show that the unassisted capacity does have discontinuity points, while it is known that the randomness-assisted capacity is always continuous in the channel. We characterize the discontinuity points and prove that the unassisted capacity is always continuous around its positivity points. After having established shared randomness as an important resource, we quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) probability of a decoding error with respect to the average error criterion, and the number of messages that can be sent over a finite number of channel uses. We relate our results to the entanglement transmission capacities of finite AVQCs, where the role of shared randomness is not yet well understood, and give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish.

  29. Declines in moose population density at Isle Royale National Park, MI, USA and accompanied changes in landscape patterns

    USGS Publications Warehouse

    De Jager, N. R.; Pastor, J.

    2009-01-01

    Ungulate herbivores create patterns of forage availability, plant species composition, and soil fertility as they range across large landscapes and consume large quantities of plant material. Over time, herbivore populations fluctuate, producing great potential for spatio-temporal landscape dynamics. In this study, we extend the spatial and temporal extent of a long-term investigation of the relationship of landscape patterns to moose foraging behavior at Isle Royale National Park, MI. We examined how patterns of browse availability and consumption, plant basal area, and soil fertility changed during a recent decline in the moose population. We used geostatistics to examine changes in the nature of spatial patterns in two valleys over 18 years and across short-range and long-range distance scales. Landscape patterns of available and consumed browse changed from either repeated patches or randomly distributed patches in 1988-1992 to random point distributions by 2007 after a recent record high peak followed by a rapid decline in the moose population. Patterns of available and consumed browse became decoupled during the moose population low, which is in contrast to coupled patterns during the earlier high moose population. Distributions of plant basal area and soil nitrogen availability also switched from repeated patches to randomly distributed patches in one valley and to random point distributions in the other valley. Rapid declines in moose population density may release vegetation and soil fertility from browsing pressure and in turn create random landscape patterns.

  30. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    PubMed

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.

  31. Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.

    2018-05-01

    Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
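
    A quick check of the uniform baseline described above, on a torus to avoid boundary corrections: with the linking range r tuned so that n exp(-nπr²) ≈ 3, the isolated-node count should be approximately Poisson, so its mean and variance should nearly coincide.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(10)
n, trials = 1000, 400
r = np.sqrt(np.log(n / 3.0) / (np.pi * n))    # tuned so E[# isolated] ~ 3
counts = []
for _ in range(trials):
    pts = rng.random((n, 2))
    d, _ = cKDTree(pts, boxsize=1.0).query(pts, k=2)   # periodic distances
    counts.append(int((d[:, 1] > r).sum()))            # isolated: no node within r
counts = np.array(counts)
print("theory mean:", n * np.exp(-n * np.pi * r ** 2))
print("empirical mean:", counts.mean(), " empirical variance:", counts.var())
# mean ~ variance is the Poisson signature for the uniform (smooth) case
```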

  32. Optimal random search for a single hidden target.

    PubMed

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
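
    The discrete analogue is quick to verify: if the target sits at site i with probability p_i and each independent search draw lands there with probability q_i, the expected number of trials is Σ_i p_i/q_i, which Cauchy-Schwarz shows is minimized by q ∝ √p (this corresponds to the "must get very close" regime of the abstract). A sketch with an arbitrary target distribution:

```python
import numpy as np

rng = np.random.default_rng(11)
M = 50
p = rng.random(M) ** 3
p /= p.sum()                       # hidden-target distribution over M sites

def expected_trials(q, p):
    # a target at site i is found on each draw w.p. q_i => mean 1/q_i trials
    return np.sum(p / q)

uniform = np.full(M, 1.0 / M)
match = p.copy()                   # naively search where the target likely is
sqrt_rule = np.sqrt(p) / np.sqrt(p).sum()   # optimal: q proportional to sqrt(p)

for name, q in [("uniform", uniform), ("q = p", match), ("q ~ sqrt(p)", sqrt_rule)]:
    print(f"{name:12s} expected trials: {expected_trials(q, p):8.1f}")
```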

  33. Reflectorized license plates: do they reduce nighttime rear-end collisions?

    DOT National Transportation Integrated Search

    1974-01-01

    The Commonwealth of Virginia randomly distributed 100,000 sets of experimental reflectorized and 100,000 sets of control nonreflective 1971 license plates. Each Division of Motor Vehicles distribution point in the state received and sold a pro rata n...

  34. Entanglement spectrum of random-singlet quantum critical points

    NASA Astrophysics Data System (ADS)

    Fagotti, Maurizio; Calabrese, Pasquale; Moore, Joel E.

    2011-01-01

    The entanglement spectrum (i.e., the full distribution of Schmidt eigenvalues of the reduced density matrix) contains more information than the conventional entanglement entropy and has been studied recently in several many-particle systems. We compute the disorder-averaged entanglement spectrum, in the form of the disorder-averaged moments Tr ρ_A^α of the reduced density matrix ρ_A, for a contiguous block of many spins at the random-singlet quantum critical point in one dimension. The result compares well in the scaling limit with numerical studies on the random XX model and is also expected to describe the (interacting) random Heisenberg model. Our numerical studies on the XX case reveal that the dependence of the entanglement entropy and spectrum on the geometry of the Hilbert space partition is quite different than for conformally invariant critical points.

  35. Improved Results for Route Planning in Stochastic Transportation Networks

    NASA Technical Reports Server (NTRS)

    Boyan, Justin; Mitzenmacher, Michael

    2000-01-01

    In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest-path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial-time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.

  36. Anomalous dispersion in correlated porous media: a coupled continuous time random walk approach

    NASA Astrophysics Data System (ADS)

    Comolli, Alessandro; Dentz, Marco

    2017-09-01

    We study the causes of anomalous dispersion in Darcy-scale porous media characterized by spatially heterogeneous hydraulic properties. Spatial variability in hydraulic conductivity leads to spatial variability in the flow properties through Darcy's law and thus impacts solute and particle transport. We consider purely advective transport in heterogeneity scenarios characterized by broad distributions of heterogeneity length scales and point values. Particle transport is characterized in terms of the stochastic properties of equidistantly sampled Lagrangian velocities, which are determined by the flow and conductivity statistics. The persistence length scales of flow and transport velocities are imprinted in the spatial disorder and reflect the distribution of heterogeneity length scales. Particle transitions over the velocity length scales are kinematically coupled with the transition time through velocity. We show that the average particle motion follows a coupled continuous time random walk (CTRW), which is fully parameterized by the distribution of flow velocities and the medium geometry in terms of the heterogeneity length scales. The coupled CTRW provides a systematic framework for the investigation of the origins of anomalous dispersion in terms of heterogeneity correlation and the distribution of conductivity point values. We derive analytical expressions for the asymptotic scaling of the moments of the spatial particle distribution and first arrival time distribution (FATD), and perform numerical particle tracking simulations of the coupled CTRW to capture the full average transport behavior. Broad distributions of heterogeneity point values and length scales may lead to very similar dispersion behaviors in terms of the spatial variance. Their mechanisms, however, are very different, which manifests in the distributions of particle positions and arrival times, which play a central role for the prediction of the fate of dissolved substances in heterogeneous natural and engineered porous materials. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
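
    A stripped-down coupled CTRW in this spirit (a sketch under assumed distributions, not the paper's parameterization): transition lengths are Pareto distributed, point velocities are power-law distributed near zero, and the transition time is coupled to the length through the velocity, t = l/v.

```python
import numpy as np

rng = np.random.default_rng(12)
n_part, n_steps = 20000, 200
u1 = 1.0 - rng.random((n_part, n_steps))          # uniform on (0, 1]
u2 = 1.0 - rng.random((n_part, n_steps))
ell = u1 ** (-1.0 / 2.5)        # heterogeneity lengths: P(l > x) ~ x**-2.5, x >= 1
v = u2 ** 2.0                   # point velocities: p(v) ~ v**-0.5 near v = 0
t = np.cumsum(ell / v, axis=1)  # transition time coupled to length through velocity
x = np.cumsum(ell, axis=1)

T0 = np.percentile(t[:, -1], 5) / 4   # observe well before most walks run out of steps
for T in (T0, 2 * T0):
    k = (t <= T).sum(axis=1)          # completed transitions by time T
    pos = np.where(k > 0, x[np.arange(n_part), np.maximum(k - 1, 0)], 0.0)
    print(f"T = {T:10.1f}   spatial variance = {pos.var():12.2f}")
# faster-than-linear variance growth signals non-Fickian (anomalous) dispersion
```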

  37. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple-scattering contributions in radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given, and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize a sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one of multidimensional integration involving two independent variables, and the other of computing the second-order scattering contribution to the sky radiance.

  38. Hessian eigenvalue distribution in a random Gaussian landscape

    NASA Astrophysics Data System (ADS)

    Yamada, Masaki; Vilenkin, Alexander

    2018-03-01

    The energy landscape of multiverse cosmology is often modeled by a multi-dimensional random Gaussian potential. The physical predictions of such models crucially depend on the eigenvalue distribution of the Hessian matrix at potential minima. In particular, the stability of vacua and the dynamics of slow-roll inflation are sensitive to the magnitude of the smallest eigenvalues. The Hessian eigenvalue distribution has been studied earlier, using the saddle point approximation, in the leading order of the 1/N expansion, where N is the dimensionality of the landscape. This approximation, however, is insufficient for the small-eigenvalue end of the spectrum, where sub-leading terms play a significant role. We extend the saddle point method to account for the sub-leading contributions. We also develop a new approach, where the eigenvalue distribution is found as an equilibrium distribution at the endpoint of a stochastic process (Dyson Brownian motion). The results of the two approaches are consistent in cases where both methods are applicable. We discuss the implications of our results for vacuum stability and slow-roll inflation in the landscape.

  39. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples, weakening the BAO signals, if the cosmic variance cannot be ignored. We propose a smooth function of the redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, this improvement would be valuable for future measurements of galaxy clustering.

  40. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    NASA Astrophysics Data System (ADS)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  41. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time evolving fission chains, due to Feynman, that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting, with explicit formulas for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
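
    The "remarkably simple Monte Carlo realization" of a single fission chain can be sketched as a branching process; the induced-fission probability and the multiplicity distribution below are made-up placeholders, not the paper's nuclear data.

```python
import numpy as np

rng = np.random.default_rng(13)
p_fis = 0.3                                  # prob. an internal neutron induces fission
nu_pmf = np.array([0.03, 0.16, 0.33, 0.30, 0.13, 0.05])   # hypothetical multiplicity pmf

def run_chain():
    """One fission chain: returns (number of fissions, number of leaked neutrons)."""
    internal, fissions, leaked = 1, 0, 0     # one chain-initiating neutron
    while internal:
        internal -= 1
        if rng.random() < p_fis:
            fissions += 1
            internal += rng.choice(len(nu_pmf), p=nu_pmf)   # nu new neutrons
        else:
            leaked += 1
    return fissions, leaked

data = np.array([run_chain() for _ in range(20000)])
print("mean fissions/chain:", data[:, 0].mean())
print("leaked-neutron distribution (first 6):",
      np.bincount(data[:, 1])[:6] / len(data))
```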

  42. QCD-inspired spectra from Blue's functions

    NASA Astrophysics Data System (ADS)

    Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail

    1996-02-01

    We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models inspired by QCD, whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions, in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.

  43. The Effect of Distributed Practice in Undergraduate Statistics Homework Sets: A Randomized Trial

    ERIC Educational Resources Information Center

    Crissinger, Bryan R.

    2015-01-01

    Most homework sets in statistics courses are constructed so that students concentrate or "mass" their practice on a certain topic in one problem set. Distributed practice homework sets include review problems in each set so that practice on a topic is distributed across problem sets. There is a body of research that points to the…

  6. Clustering, randomness and regularity in cloud fields. I - Theoretical considerations. II - Cumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.

    1992-01-01

    The ongoing controversy over regularity versus clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models, which bias the observed nearest-neighbor statistics towards regularity. The interpretation of these nearest-neighbor statistics is discussed for many cases of superposed clustering, randomness, and regularity. A detailed analysis of cumulus cloud field spatial distributions, based upon Landsat, AVHRR, and Skylab data, shows that when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.
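
    A small sketch of the nearest-neighbor cumulative distribution comparison underlying such studies: the empirical nearest-neighbor CDF of a pattern is compared against the complete-spatial-randomness (CSR) form F(r) = 1 - exp(-lambda*pi*r^2); edge effects are ignored, and the uniform pattern below is only a stand-in for a cloud field:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 500
      pts = rng.uniform(0.0, 1.0, size=(n, 2))      # stand-in point pattern

      d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      np.fill_diagonal(d, np.inf)
      r = np.sort(d.min(axis=1))                    # nearest-neighbor distances
      emp = np.arange(1, n + 1) / n                 # empirical CDF at the sorted r

      lam = n / 1.0                                 # intensity (points per unit area)
      csr = 1.0 - np.exp(-lam * np.pi * r**2)       # CSR nearest-neighbor CDF

      print("max |F_emp - F_csr| =", np.abs(emp - csr).max())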

  7. Point process statistics in atom probe tomography.

    PubMed

    Philippe, T; Duguay, S; Grancher, G; Blavette, D

    2013-09-01

    We present a review of spatial point processes as statistical models that we have designed for the analysis and treatment of atom probe tomography (APT) data. As a major advantage, these methods do not require sampling. The mean distance to the nearest neighbour is an attractive approach for exhibiting a non-random atomic distribution. A χ² test based on distance distributions to the nearest neighbour has been developed to detect deviation from randomness. Best-fit methods based on the first nearest neighbour distance (1NN method) and the pair correlation function are presented and compared to assess the chemical composition of tiny clusters. Delaunay tessellation for cluster selection is also illustrated. These statistical tools have been applied to APT experiments on microelectronics materials.

  8. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic, and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits to a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
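
    The U-shaped occupation statistics can be reproduced with a deliberately crude two-site caricature of a CTRW, assuming Pareto-distributed waiting times with tail index alpha < 1 (infinite mean sojourn time); all parameter values are illustrative only:

      import numpy as np

      rng = np.random.default_rng(3)
      alpha, T, trials = 0.5, 1e4, 5000     # alpha < 1: infinite mean waiting time

      frac = np.empty(trials)
      for k in range(trials):
          t, state, t_plus = 0.0, 1, 0.0
          while t < T:
              w = min(1.0 + rng.pareto(alpha), T - t)   # heavy-tailed waiting time
              if state == 1:
                  t_plus += w
              t += w
              state = -state                            # hop to the other site
          frac[k] = t_plus / T

      hist, _ = np.histogram(frac, bins=10, range=(0, 1), density=True)
      print(np.round(hist, 2))    # density piles up near 0 and 1: the U shape

    With finite-mean waiting times the same histogram instead concentrates around 1/2, which is the ergodic behavior the abstract contrasts against.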

  9. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, along with information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as at the event times. A method is proposed for fitting a mixed Poisson point process model of the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.

  10. Random oligonucleotide mutagenesis: application to a large protein coding sequence of a major histocompatibility complex class I gene, H-2DP.

    PubMed Central

    Murray, R; Pederson, K; Prosser, H; Muller, D; Hutchison, C A; Frelinger, J A

    1988-01-01

    We have used random oligonucleotide mutagenesis (or saturation mutagenesis) to create a library of point mutations in the alpha 1 protein domain of a Major Histocompatibility Complex (MHC) molecule. This protein domain is critical for T cell and B cell recognition. We altered the MHC class I H-2DP gene sequence such that synthetic mutant alpha 1 exons (270 bp of coding sequence), which contain mutations identified by sequence analysis, can replace the wild type alpha 1 exon. The synthetic exons were constructed from twelve overlapping oligonucleotides which contained an average of 1.3 random point mutations per intact exon. DNA sequence analysis of mutant alpha 1 exons has shown a point-mutation distribution that fits a Poisson distribution, which emphasizes the utility of this mutagenesis technique to "scan" a large protein sequence for important mutations. We report our use of saturation mutagenesis to scan an entire exon of the H-2DP gene, a cassette strategy to replace the wild type alpha 1 exon with individual mutant alpha 1 exons, and analysis of mutant molecules expressed on the surface of transfected mouse L cells. PMID:2903482

  11. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
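
    The Monte Carlo part of such an algorithm is straightforward to sketch; the per-axis standard deviations below are assumed placeholders, not mission values:

      import numpy as np

      rng = np.random.default_rng(4)
      sigma = np.array([1.0, 2.5, 0.5])     # assumed, possibly unequal std devs

      dv = rng.normal(0.0, sigma, size=(200_000, 3))   # zero-mean Gaussian components
      mag = np.linalg.norm(dv, axis=1)                 # |Delta v| samples

      print("mean |dv|:", mag.mean())
      print("std  |dv|:", mag.std())
      print("95th percentile (e.g. for propellant sizing):", np.quantile(mag, 0.95))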

  12. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described.
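
    A toy one-dimensional illustration of the "equilibrium start" idea, with an assumed unnormalized target standing in for the posterior over model space: the first point is drawn exactly by rejection sampling, and detailed balance then keeps every subsequent Metropolis iterate distributed according to the target:

      import numpy as np

      rng = np.random.default_rng(5)
      f = lambda x: np.exp(-x**4)            # toy unnormalized target, bounded by 1

      def rejection_draw():
          # envelope: Uniform(-3, 3); f is negligible outside this interval
          while True:
              x = rng.uniform(-3.0, 3.0)
              if rng.uniform() < f(x):
                  return x

      x = rejection_draw()                   # exact draw from the target
      chain = [x]
      for _ in range(10_000):
          y = x + rng.normal(0.0, 0.5)       # Metropolis step preserves the target,
          if rng.uniform() < f(y) / f(x):    # so every iterate stays "in equilibrium"
              x = y
          chain.append(x)

      print("sample mean and variance:", np.mean(chain), np.var(chain))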

  13. Probabilistic analysis of structures involving random stress-strain behavior

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Thacker, B. H.; Harren, S. V.

    1991-01-01

    The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at point of ultimate load, and (5) engineering strain at point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.

  14. Distribution functions of probabilistic automata

    NASA Technical Reports Server (NTRS)

    Vatan, F.

    2001-01-01

    Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1,..., k-1} and, hence, can consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M{w : X(w) < x}. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much simpler method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random composition of these two systems.

  15. Inflation with a graceful exit in a random landscape

    NASA Astrophysics Data System (ADS)

    Pedro, F. G.; Westphal, A.

    2017-03-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  16. Linear velocity fields in non-Gaussian models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  17. Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data

    USGS Publications Warehouse

    Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.

    2001-01-01

    Radiotelemetry has been used commonly to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) program creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used is calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analysis of radiotelemetry data for a female wild turkey (Meleagris gallopavo).
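
    A rough numpy stand-in for the SAS/AML pipeline described above (the habitat grid, error covariance, and subsample size are invented for illustration): points are drawn from the bivariate-normal error distribution of one estimated location and tallied by habitat type:

      import numpy as np

      rng = np.random.default_rng(6)
      habitat = rng.integers(0, 3, size=(100, 100))   # toy habitat grid, codes 0-2

      def habitat_use(est_xy, cov, n_sub=500):
          """Subsample one location's error distribution; tally habitat types."""
          pts = rng.multivariate_normal(est_xy, cov, size=n_sub)
          cols = np.clip(pts[:, 0].astype(int), 0, habitat.shape[1] - 1)
          rows = np.clip(pts[:, 1].astype(int), 0, habitat.shape[0] - 1)
          return np.bincount(habitat[rows, cols], minlength=3) / n_sub

      props = habitat_use(est_xy=(40.0, 60.0), cov=[[9.0, 2.0], [2.0, 16.0]])
      print("estimated habitat-use proportions:", props)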

  18. Dynamic laser speckle analyzed considering inhomogeneities in the biological sample

    NASA Astrophysics Data System (ADS)

    Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico

    2017-04-01

    The dynamic laser speckle phenomenon allows a contactless and nondestructive way to monitor biological changes, quantified by second-order statistics applied to images in time using a secondary matrix known as the time history of the speckle pattern (THSP). To save processing time, the traditional way to build the THSP restricts the data to a single line or column of pixels. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatially random approach to collecting the points that form a THSP. Cells in a culture medium and drying paint, representing homogeneous samples at different levels, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the outcomes of the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
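
    A sketch of the three THSP sampling schemes compared in the record, on a synthetic image stack (array shapes, the number of sampled pixels, and the Gaussian spread are arbitrary choices, not values from the paper):

      import numpy as np

      rng = np.random.default_rng(7)
      stack = rng.integers(0, 256, size=(128, 240, 320))   # (time, height, width)
      T, H, W = stack.shape
      n_pts = 128

      # traditional THSP: one fixed column of pixels followed through time
      thsp_column = stack[:, :n_pts, W // 2].T             # shape (n_pts, T)

      # random THSP: pixels drawn uniformly over the whole frame
      ys, xs = rng.integers(0, H, n_pts), rng.integers(0, W, n_pts)
      thsp_random = stack[:, ys, xs].T

      # Gaussian variant: pixels concentrated around a region of interest
      yg = np.clip(rng.normal(H / 2, 20.0, n_pts).astype(int), 0, H - 1)
      xg = np.clip(rng.normal(W / 2, 20.0, n_pts).astype(int), 0, W - 1)
      thsp_gauss = stack[:, yg, xg].T

      print(thsp_column.shape, thsp_random.shape, thsp_gauss.shape)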

  19. Derivatives of random matrix characteristic polynomials with applications to elliptic curves

    NASA Astrophysics Data System (ADS)

    Snaith, N. C.

    2005-12-01

    The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.

  1. Probabilistic SSME blades structural response under random pulse loading

    NASA Technical Reports Server (NTRS)

    Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.

    1987-01-01

    The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
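
    The pulse model described here is easy to realize directly; the arrival rate, amplitude standard deviation, and the three candidate locations below are assumed values for illustration:

      import numpy as np

      rng = np.random.default_rng(8)
      rate, sigma, horizon = 2.0, 1.5, 100.0   # assumed arrival rate, intensity, span
      locations = (0, 1, 2)                    # three equally likely tip locations

      t, events = 0.0, []
      while True:
          t += rng.exponential(1.0 / rate)     # Poisson arrivals: exponential gaps
          if t > horizon:
              break
          amp = rng.normal(0.0, sigma)         # zero-mean normal pulse amplitude
          events.append((t, amp, rng.choice(locations)))

      print(len(events), "impacts; first:", events[0])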

  2. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point all subjects receive the experimental therapy, and the exposure to placebo lasts only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across the time-to-response distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a power similar to the parallel groups design.

  3. An application of randomization for detecting evidence of thermoregulation in timber rattlesnakes (Crotalus horridus) from northwest Arkansas.

    PubMed

    Wills, C A; Beaupre, S J

    2000-01-01

    Most reptiles maintain their body temperatures within normal functional ranges through behavioral thermoregulation. Under some circumstances, thermoregulation may be a time-consuming activity, and thermoregulatory needs may impose significant constraints on the activities of ectotherms. A necessary (but not sufficient) condition for demonstrating thermoregulation is a difference between observed body temperature distributions and available operative temperature distributions. We examined operative and body temperature distributions of the timber rattlesnake (Crotalus horridus) for evidence of thermoregulation. Specifically, we compared the distribution of available operative temperatures in the environment to snake body temperatures during August and September. Operative temperatures were measured using 48 physical models that were randomly deployed in the environment and connected to a Campbell CR-21X data logger. Body temperatures (n=1,803) were recorded from 12 radiotagged snakes using temperature-sensitive telemetry. Separate randomization tests were conducted for each hour of day within each month. Actual body temperature distributions differed significantly from operative temperature distributions at most time points considered. Thus, C. horridus exhibits a necessary (but not sufficient) condition for demonstrating thermoregulation. However, unlike some desert ectotherms, we found no compelling evidence for thermal constraints on surface activity. Randomization may prove to be a powerful technique for drawing inferences about thermoregulation without reliance on studies of laboratory thermal preference.

  4. Significant locations in auxiliary data as seeds for typical use cases of point clustering

    NASA Astrophysics Data System (ADS)

    Kröger, Johannes

    2018-05-01

    Random greedy clustering and grid-based clustering are highly sensitive to their initial parameters. When used for clustering point data in maps, they often change the apparent distribution of the underlying data. We propose a process that uses precomputed weighted seed points for the initialization of clusters, derived for example from local maxima in population density data. Exemplary results from the clustering of a dataset of petrol stations are presented.

  5. Offdiagonal complexity: A computationally quick complexity measure for graphs and networks

    NASA Astrophysics Data System (ADS)

    Claussen, Jens Christian

    2007-02-01

    A vast variety of biological, social, and economical networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated from the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While both for regular lattices and fully connected networks OdC is zero, it takes a moderately low value for a random graph and shows high values for apparently complex structures as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.

  6. Backward deletion to minimize prediction errors in models from factorial experiments with zero to six center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1980-01-01

    Population model coefficients were chosen to simulate a saturated 2⁴ fixed-effects experiment having an unfavorable distribution of relative values. Using random-number studies, deletion strategies were compared that were based on the F distribution, on an order-statistics distribution of Cochran's, and on a combination of the two. Results of the comparisons and a recommended strategy are given.

  7. Evaluation of the accuracy of the Rotating Parallel Ray Omnidirectional Integration for instantaneous pressure reconstruction from the measured pressure gradient

    NASA Astrophysics Data System (ADS)

    Moreto, Jose; Liu, Xiaofeng

    2017-11-01

    The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. A Dirichlet condition at one boundary point and Neumann conditions at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with a homogeneously distributed random noise added to the entire field of DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the Matlab function Rand, has a magnitude varying randomly within the range of ±40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions, achieved by using different random number seeds, are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure, normalized by the DNS pressure variation range, is 0.15 ± 0.07 for the Poisson equation approach, 0.028 ± 0.003 for the Circular Virtual Boundary method, and 0.027 ± 0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: the San Diego State University UGP program.

  8. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    NASA Astrophysics Data System (ADS)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions improves precision over global heuristics by up to ≈12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.

  9. Three-dimensional distribution of cortical synapses: a replicated point pattern-based analysis

    PubMed Central

    Anton-Sanchez, Laura; Bielza, Concha; Merchán-Pérez, Angel; Rodríguez, José-Rodrigo; DeFelipe, Javier; Larrañaga, Pedro

    2014-01-01

    The biggest problem when analyzing the brain is that its synaptic connections are extremely complex. Generally, the billions of neurons making up the brain exchange information through two types of highly specialized structures: chemical synapses (the vast majority) and so-called gap junctions (a substrate of one class of electrical synapse). Here we are interested in exploring the three-dimensional spatial distribution of chemical synapses in the cerebral cortex. Recent research has shown that the three-dimensional spatial distribution of synapses in layer III of the neocortex can be modeled by a random sequential adsorption (RSA) point process, i.e., synapses are distributed in space almost randomly, with the only constraint that they cannot overlap. In this study we hypothesize that RSA processes can also explain the distribution of synapses in all cortical layers. We also investigate whether there are differences in both the synaptic density and spatial distribution of synapses between layers. Using combined focused ion beam milling and scanning electron microscopy (FIB/SEM), we obtained three-dimensional samples from the six layers of the rat somatosensory cortex and identified and reconstructed the synaptic junctions. A total tissue volume of approximately 4500 μm³ and around 4000 synapses from three different animals were analyzed. Different samples, layers and/or animals were aggregated and compared using RSA replicated spatial point processes. The results showed no significant differences in the synaptic distribution across the different rats used in the study. We found that RSA processes described the spatial distribution of synapses in all samples of each layer. We also found that the synaptic distribution in layers II to VI conforms to a common underlying RSA process with different densities per layer. Interestingly, the results showed that synapses in layer I had a slightly different spatial distribution from the other layers. PMID:25206325

  10. Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings

    NASA Astrophysics Data System (ADS)

    Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső

    We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
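
    A heavily simplified sketch of the cluster-solve-concatenate idea, under stated substitutions: a uniform grid replaces the random dyadic tiling, a greedy nearest-neighbour ordering replaces the near-optimal per-cluster solvers, and the midpoint-based concatenation and partition optimization are omitted:

      import numpy as np

      rng = np.random.default_rng(9)
      pts = rng.uniform(0, 1, size=(2000, 2))

      def nn_tour(points):
          """Greedy nearest-neighbour ordering (stand-in for a near-optimal solver)."""
          left = list(range(len(points)))
          tour = [left.pop(0)]
          while left:
              last = points[tour[-1]]
              nxt = min(left, key=lambda i: np.sum((points[i] - last) ** 2))
              left.remove(nxt)
              tour.append(nxt)
          return tour

      # 1) partition: a fixed k x k grid stands in for the random dyadic tiling
      k = 8
      cells = np.minimum((pts * k).astype(int), k - 1)
      clusters = {}
      for idx, cell in enumerate(map(tuple, cells)):
          clusters.setdefault(cell, []).append(idx)

      # 2) "coarse" tour over the cluster barycenters
      keys = list(clusters)
      bary = np.array([pts[clusters[c]].mean(axis=0) for c in keys])
      coarse = nn_tour(bary)

      # 3) solve each cluster separately, then concatenate in coarse-tour order
      tour = []
      for c in coarse:
          members = np.array(clusters[keys[c]])
          tour.extend(members[nn_tour(pts[members])])

      length = np.linalg.norm(pts[tour] - pts[np.roll(tour, -1)], axis=1).sum()
      print(f"{len(tour)} cities, tour length {length:.3f}")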

  11. Rates of profit as correlated sums of random variables

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2013-10-01

    Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.

  12. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
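
    The images-versus-points trade-off can be mimicked with a toy beta-binomial simulation: between-image variation in true cover is drawn from a beta distribution and point scoring is binomial (all parameter values below are invented, not taken from the study). Under a fixed total point budget, spreading effort over more images typically yields a smaller standard error than scoring more points per image:

      import numpy as np

      rng = np.random.default_rng(10)

      def cover_se(n_images, n_points, mu=0.10, kappa=20.0, reps=2000):
          """SD of the survey-level cover estimate under a toy beta-binomial model."""
          p = rng.beta(mu * kappa, (1 - mu) * kappa, size=(reps, n_images))
          hits = rng.binomial(n_points, p)            # random-point scoring per image
          return (hits / n_points).mean(axis=1).std()

      budget = 5000                                   # fixed total number of points
      for n_images in (50, 100, 250, 500):
          n_points = budget // n_images
          se = cover_se(n_images, n_points)
          print(f"{n_images:4d} images x {n_points:3d} points -> SE {se:.4f}")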

  13. Defect-induced change of temperature-dependent elastic constants in BCC iron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, N.; Setyawan, W.; Zhang, S. H.

    2017-07-01

    The effects of radiation-induced defects (randomly distributed vacancies, voids, and interstitial dislocation loops) on the temperature-dependent elastic constants C11, C12, and C44 in BCC iron are studied with the molecular dynamics method. The elastic constants are found to decrease with increasing temperature for all cases containing different defects. The presence of vacancies, voids, or interstitial loops further decreases the elastic constants. For a given number of point defects, randomly distributed vacancies show the strongest effect compared to voids or interstitial loops. All these results are expected to provide useful information to combine with experimental results for further understanding of radiation damage.

  14. Magneto-transport properties of a random distribution of few-layer graphene patches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iacovella, Fabrice; Mitioglu, Anatolie; Pierre, Mathieu

    In this study, we address the electronic properties of conducting films constituted of an array of randomly distributed few-layer graphene patches and investigate their most salient galvanometric features in the moderately and strongly disordered limits. We demonstrate that, in annealed devices, the ambipolar behaviour and the onset of Landau level quantization in high magnetic field constitute robust hallmarks of few-layer graphene films. In the strong disorder limit, however, the magneto-transport properties are best described by variable-range hopping behaviour. A large negative magneto-conductance is observed at the charge neutrality point, consistent with a localized transport regime.

  15. Effect of platykurtic and leptokurtic distributions in the random-field Ising model: mean-field approach.

    PubMed

    Duarte Queirós, Sílvio M; Crokidakis, Nuno; Soares-Pinto, Diogo O

    2009-07-01

    The influence of the tail features of the local magnetic field probability density function (PDF) on the ferromagnetic Ising model is studied in the limit of infinite-range interactions. Specifically, we assign to each site a quenched random field whose value is drawn from a generic distribution that comprises platykurtic and leptokurtic distributions depending on a single parameter τ < 3. For τ < 5/3, such distributions, which are basically Student-t and r-distributions extended to all plausible real degrees of freedom, present a finite standard deviation; otherwise the distribution has the same asymptotic power-law behavior as an α-stable Lévy distribution with α = (3 − τ)/(τ − 1). For every value of τ, at a specific temperature and width of the distribution, the system undergoes a continuous phase transition. Strikingly, we report the emergence of an inflexion point in the temperature versus PDF-width phase diagrams for distributions broader than the Cauchy-Lorentz (τ = 2), which is accompanied by a divergent free energy per spin (at zero temperature).

  16. Effect of Rayleigh-scattering distributed feedback on multiwavelength Raman fiber laser generation.

    PubMed

    El-Taher, A E; Harper, P; Babin, S A; Churkin, D V; Podivilov, E V; Ania-Castanon, J D; Turitsyn, S K

    2011-01-15

    We experimentally demonstrate a Raman fiber laser based on multiple point-action fiber Bragg grating reflectors and distributed feedback via Rayleigh scattering in an ~22-km-long optical fiber. Twenty-two lasing lines with a spacing of ~100 GHz (close to the International Telecommunication Union grid) are generated in the C band at the watt level. In contrast to a normal cavity with competition between laser lines, the random distributed feedback cavity exhibits highly stable multiwavelength generation with a power-equalized uniform distribution that is almost independent of power.

  17. Generating and using truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2012-01-01

    The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing.
    Program summary:
    Program title: TRQS
    Catalogue identifier: AEKA_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 7924
    No. of bytes in distributed program, including test data, etc.: 88 651
    Distribution format: tar.gz
    Programming language: Mathematica, C
    Computer: requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a recent version of Mathematica
    Operating system: any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit)
    RAM: case dependent
    Classification: 4.15
    Nature of problem: generation of random density matrices.
    Solution method: use of a physical quantum random number generator.
    Running time: generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.

  18. Review of probabilistic analysis of dynamic response of systems with random parameters

    NASA Technical Reports Server (NTRS)

    Kozin, F.; Klosner, J. M.

    1989-01-01

    The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. Dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures which require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities will appear as coefficients, determining the exact distributions will be difficult at best. Thus, certain approximations have to be made. A number of available techniques are discussed, even in the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems with approximation by linear systems.

  19. Extended q-Gaussian and q-exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity parameter q. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows any of these distributions to be related to a beta random variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and the motion of point defects in a fluid flow.
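
    One concrete instance of the gamma route, assembled here from standard beta-gamma facts rather than taken from the paper, for the compact-support (q < 1) branch: if G1 and G2 are independent Gamma(a) variables with the same scale, then G1/(G1+G2) is Beta(a, a), so X = (G1 - G2)/(G1 + G2) has density proportional to (1 - x²)^(a-1), a q-Gaussian with q = (a - 2)/(a - 1):

      import numpy as np

      rng = np.random.default_rng(11)
      a = 3.0                                  # common shape of the two gammas

      g1 = rng.gamma(a, 1.0, 500_000)          # identical (unit) scale parameter
      g2 = rng.gamma(a, 1.0, 500_000)
      x = (g1 - g2) / (g1 + g2)                # supported on (-1, 1)

      q = (a - 2) / (a - 1)                    # here q = 1/2
      hist, edges = np.histogram(x, bins=60, density=True)
      mid = 0.5 * (edges[:-1] + edges[1:])
      pdf = (1 - mid**2) ** (a - 1)
      pdf /= pdf.sum() * (edges[1] - edges[0]) # numerical normalization
      print(f"q = {q}, max |hist - pdf| = {np.abs(hist - pdf).max():.3f}")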

  1. TRUNCATED RANDOM MEASURES

    DTIC Science & Technology

    2018-01-12

    sequential representations, a method is required for determining which to use for the application at hand and, once a representation is selected, for... Methods, Assumptions, and Procedures: 3.1 Background; 3.1.1 CRMs and truncation. Consider a Poisson point process on R+ := [0... the heart of the study of truncated CRMs. They provide an iterative method that can be terminated at any point to yield a finite approximation to the

  2. The correlation function for density perturbations in an expanding universe. II - Nonlinear theory

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1977-01-01

    A formalism is developed to find the two-point and higher-order correlation functions for a given distribution of sizes and shapes of perturbations which are randomly placed in three-dimensional space. The perturbations are described by two parameters, such as central density and size, and the two-point correlation function is explicitly related to the luminosity function of groups and clusters of galaxies.

  3. All about Eve: Secret Sharing using Quantum Effects

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    2005-01-01

    This document discusses the nature of light (including classical light and photons), encryption, quantum key distribution (QKD), light polarization, and beamsplitters, and their application to secure communication. A quantum of light, called a photon, represents the smallest possible subdivision of radiant energy (light). The QKD key generation sequence is outlined: the receiver broadcasts an initial signal indicating reception availability; timing pulses from the sender provide a reference for gated detection of photons; the sender generates photons with random polarization while the receiver detects photons with random polarization; and the two parties communicate via a data link to mutually establish random keys. The QKD network vision includes inter-SATCOM, point-to-point ground-fiber, and SATCOM-fiber nodes. QKD offers an unconditionally secure method of exchanging encryption keys. Ongoing research will focus on how to increase the key generation rate.
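
    The key-generation sequence reads like BB84-style basis sifting; the record does not name a protocol, so the toy below assumes BB84 with no eavesdropper and no channel noise:

      import random

      rng = random.Random(12)
      n = 32

      bits  = [rng.randrange(2) for _ in range(n)]   # sender's raw key bits
      s_bas = [rng.randrange(2) for _ in range(n)]   # sender's random bases
      r_bas = [rng.randrange(2) for _ in range(n)]   # receiver's random bases
      meas  = [b if sb == rb else rng.randrange(2)   # wrong basis -> random outcome
               for b, sb, rb in zip(bits, s_bas, r_bas)]

      # public discussion: compare bases only, keep matching positions (sifting)
      key_a = [b for b, sb, rb in zip(bits, s_bas, r_bas) if sb == rb]
      key_b = [m for m, sb, rb in zip(meas, s_bas, r_bas) if sb == rb]
      assert key_a == key_b              # identical absent eavesdropping and noise
      print("sifted key:", "".join(map(str, key_a)))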

  4. Effect of two laser photobiomodulation application protocols on the viability of random skin flap in rats

    NASA Astrophysics Data System (ADS)

    Martignago, C. C. S. M.; Tim, C. R.; Assis, L.; Neve, L. M. G.; Bossini, P. S.; Renno, A. C.; Avó, L. R. S.; Liebano, R. E.; Parizotto, N. A.

    2018-02-01

    Objective: to identify the best low-intensity laser photobiomodulation application site to increase the viability of a cutaneous flap in rats. Methods: 18 male rats (Rattus norvegicus: var. albinus, Rodentia, Mammalia) were randomly distributed into 3 groups (n = 6). Group I (GI) was submitted to simulated laser photobiomodulation, group II (GII) was submitted to laser photobiomodulation at three points in the flap cranial base, and group III (GIII) was submitted to laser photobiomodulation at twelve points distributed along the flap. All groups were irradiated with an indium-gallium-aluminum-phosphide (InGaAlP) diode laser, 660 nm, with a power of 50 mW and a total energy of 12 J in continuous emission mode. The treatment started immediately after performing the cranial-base random skin flap (10 × 4 cm²) and was reapplied every 24 hours, for a total of 5 applications. The percentage of necrosis area was evaluated on the 7th postoperative day, after which the animals were euthanized and material was collected for histological analysis. Results: GII animals presented a statistically significant decrease in the necrosis area when compared to the other groups, and a statistically significant increase in the quantification of collagen when compared to the control. We did not observe a statistically significant difference in TGFβ and FGF expression between the groups evaluated. Conclusion: the application of laser photobiomodulation at three points of the flap cranial base was more effective than at twelve points in reducing the necrosis area.

  5. Stochastic modelling for lake thermokarst and peatland patterns in permafrost and near permafrost zones

    NASA Astrophysics Data System (ADS)

    Orlov, Timofey; Sadkov, Sergey; Panchenko, Evgeniy; Zverev, Andrey

    2017-04-01

    Peatlands occupy a significant share of the cryolithozone area. They are currently experiencing intense pressure from oil and gas field development, as well as from the construction of infrastructure. This underlines the importance of peatland studies, including those dealing with forecasts of peatland evolution. Earlier we conducted a similar probabilistic modelling for areas of thermokarst development. Its principal assumptions were: 1. The appearance of a thermokarst depression within a given area is a random event whose probability is directly proportional to the size of the area (Δs). For small sites the probability of one thermokarst depression appearing is much greater than that of several, i.e. p_1 = γΔs + o(Δs) and p_k = o(Δs) for k = 2, 3, .... 2. The growth of a new thermokarst depression is a random variable independent of the growth of other depressions. It happens due to thermoabrasion and, hence, is directly proportional to the amount of heat in the lake and inversely proportional to the lateral surface area of the lake depression. Using this model, we obtain analytically the two main laws of the morphological pattern of lake thermokarst plains. First, the number of thermokarst depressions (centers) on a random plot obeys the Poisson law: P(k, s) = ((γs)^k / k!) e^(−γs), where γ is the average number of depressions per unit area and s is the area of a trial site. Second, a lognormal distribution of the diameters of thermokarst lakes holds at any time, i.e. the density is given by the equation f_d(x, t) = (1/(√(2π) σ x √t)) e^(−...

  6. 3D vector distribution of the electro-magnetic fields on a random gold film

    NASA Astrophysics Data System (ADS)

    Canneson, Damien; Berini, Bruno; Buil, Stéphanie; Hermier, Jean-Pierre; Quélin, Xavier

    2018-05-01

    The 3D vector distribution of the electro-magnetic fields at the very close vicinity of the surface of a random gold film is studied. Such films are well known for their properties of light confinement and large fluctuations of local density of optical states. Using Finite-Difference Time-Domain simulations, we show that it is possible to determine the local orientation of the electro-magnetic fields. This allows us to obtain a complete characterization of the fields. Large fluctuations of their amplitude are observed as previously shown. Here, we demonstrate large variations of their direction depending both on the position on the random gold film, and on the distance to it. Such characterization could be useful for a better understanding of applications like the coupling of point-like dipoles to such films.

  7. Advanced analysis of forest fire clustering

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Pereira, Mario; Golay, Jean

    2017-04-01

    Analysis of point pattern clustering is an important topic in spatial statistics and for many applications: biodiversity, epidemiology, natural hazards, geomarketing, etc. There are several fundamental approaches used to quantify spatial data clustering using topological, statistical and fractal measures. In the present research, the recently introduced multi-point Morisita index (mMI) is applied to study the spatial clustering of forest fires in Portugal. The data set consists of more than 30,000 fire events covering the time period from 1975 to 2013. The distribution of forest fires is very complex and highly variable in space. mMI is a multi-point extension of the classical two-point Morisita index. In essence, mMI is estimated by covering the region under study by a grid and by computing how many times more likely it is that m points selected at random will be from the same grid cell than it would be in the case of a completely random Poisson process. By changing the number of grid cells (size of the grid cells), mMI characterizes the scaling properties of spatial clustering. From mMI, the data intrinsic dimension (fractal dimension) of the point distribution can be estimated as well. In this study, the mMI of forest fires is compared with the mMI of random patterns (RPs) generated within the validity domain defined as the forest area of Portugal. It turns out that the forest fires are highly clustered inside the validity domain in comparison with the RPs. Moreover, they demonstrate different scaling properties at different spatial scales. The results obtained from the mMI analysis are also compared with those of fractal measures of clustering: the box-counting and sandbox-counting approaches. References: Golay J., Kanevski M., Vega Orozco C., Leuenberger M., 2014: The multipoint Morisita index for the analysis of spatial patterns. Physica A, 406, 191-202. Golay J., Kanevski M., 2015: A new estimator of intrinsic dimension based on the multipoint Morisita index. Pattern Recognition, 48, 4070-4081.
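
    A minimal sketch of the mMI estimator described above, assuming the product-form definition of Golay et al. (2014) and, for simplicity, a unit-square study area (the function name and test data are ours):

      import numpy as np

      def multipoint_morisita(points, cells_per_side, m=2):
          # Bin points of the unit square into a cells_per_side x cells_per_side grid.
          pts = np.asarray(points)
          ij = np.clip((pts * cells_per_side).astype(int), 0, cells_per_side - 1)
          counts = np.bincount(ij[:, 0] * cells_per_side + ij[:, 1],
                               minlength=cells_per_side ** 2)
          N, Q = len(pts), cells_per_side ** 2
          # Falling products n(n-1)...(n-m+1) count how often m points share a
          # cell, normalized by the same probability under complete randomness.
          num = np.prod([counts - k for k in range(m)], axis=0).sum()
          den = np.prod([N - k for k in range(m)])
          return Q ** (m - 1) * num / den

      rng = np.random.default_rng(1)
      uniform = rng.random((5000, 2))                          # index near 1
      clustered = 0.5 + 0.02 * rng.standard_normal((5000, 2))  # index >> 1
      for q in (4, 8, 16):  # coarser to finer grids probe the scaling
          print(q, multipoint_morisita(uniform, q), multipoint_morisita(clustered, q))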

  8. A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2010-08-01

    A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained for any n for d = 1, 2, 4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q > 0. Given that the total walk length equals 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q = 1. Simple analytical expressions are obtained for any d ≥ 2 and n ≥ 2 for the endpoint distributions of two families of walks whose q are integers or half-integers which depend solely on d. These endpoint distributions have a simple geometrical interpretation. For a two-step planar walk with q = 1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4. Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are considered in particular.
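
    The uniform-endpoint result is easy to check numerically. The sketch below simulates the three-step planar walk with q = 1 (Dirichlet(1, 1, 1) step lengths summing to 1, uniform orientations) and compares the empirical radial law with P(R ≤ r) = r², the signature of a uniform distribution over the unit disc:

      import numpy as np

      rng = np.random.default_rng(2)
      n_steps, n_walks = 3, 200_000

      # Step lengths: Dirichlet(1, 1, 1), i.e. exponential lengths conditioned
      # on a total length of 1 (the q = 1 walk); orientations are uniform.
      lengths = rng.dirichlet(np.ones(n_steps), size=n_walks)
      angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_walks, n_steps))
      x = (lengths * np.cos(angles)).sum(axis=1)
      y = (lengths * np.sin(angles)).sum(axis=1)
      r = np.hypot(x, y)

      # A uniform endpoint in the unit disc implies P(R <= r) = r^2.
      for q in (0.25, 0.5, 0.75):
          print(q, round((r <= q).mean(), 4), "expected", q * q)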

  9. Statistics of primordial density perturbations from discrete seed masses

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Bertschinger, Edmund

    1991-01-01

    The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.

  10. Random Matrix Theory and Elliptic Curves

    DTIC Science & Technology

    2014-11-24

    ... points on that curve. Counting rational points on curves is a field with a rich ... deficiency of zeros near the origin of the histograms in Figure 1. While as d becomes large this discretization becomes smaller and has less and less effect ... (order of 30), the regular oscillations seen at the origin become dominated by fluctuations of an arithmetic origin, influenced by zeros of the Riemann

  11. Discovery of Non-random Spatial Distribution of Impacts in the Stardust Cometary Collector

    NASA Technical Reports Server (NTRS)

    Horz, Friedrich; Westphal, Andrew J.; Gainsforth, Zack; Borg, Janet; Djouadi, Zahia; Bridges, John; Franchi, Ian; Brownlee, Donald E.; Cheng, Andrew F.; Clark, Benton C.; et al.

    2007-01-01

    We report the discovery that impacts in the Stardust cometary collector are not distributed randomly in the collecting media, but appear to be clustered on scales smaller than 10 cm. We also report the discovery of at least two populations of oblique tracks. We evaluated several hypotheses that could explain the observations. No hypothesis was consistent with all the observations, but the preponderance of evidence points toward at least one impact on the central Whipple shield of the spacecraft as the origin of both clustering and low-angle oblique tracks. High-angle oblique tracks unambiguously originate from a non-cometary impact on the spacecraft bus just forward of the collector.

  12. A randomization approach to handling data scaling in nuclear medicine.

    PubMed

    Bai, Chuanyong; Conwell, Richard; Kindem, Joel

    2010-06-01

    In medical imaging, data scaling is sometimes desired to handle system complexity, such as uniformity calibration. Since the data are usually saved as short integers, conventional data scaling first scales the data in floating-point format and then truncates or rounds the floating-point values to short integers. For example, when using truncation, scaling 9 by 1.1 results in 9 and scaling 10 by 1.1 results in 11. When the count level is low, such scaling may change the local data distribution and affect the intended application of the data. In this work, the authors use an example gated cardiac SPECT study to illustrate the effect of conventional scaling by factors of 1.1 and 1.2. The authors then scaled the data with the same scaling factors using a randomization approach, in which a random number evenly distributed between 0 and 1 is generated to determine how the floating-point value will be saved as a short integer. If the random number is between 0 and 0.9, then 9.9 will be saved as 10, otherwise as 9. In other words, the floating-point value 9.9 will be saved as the short integer 10 with 90% probability or 9 with 10% probability. For statistical analysis of the performance, the authors applied the conventional approach with rounding and the randomization approach to 50 consecutive gated studies from a clinical site. For the example study, the image reconstructed from the original data showed an apparent perfusion defect at the apex of the myocardium. The defect size was noticeably changed by scaling with 1.1 and 1.2 using the conventional approaches with truncation and rounding. Using the randomization approach, in contrast, the images from the scaled data appeared identical to the original image. Line profile analysis of the scaled data showed that the randomization approach introduced the least change to the data as compared to the conventional approaches. For the 50 gated data sets, significantly more studies showed quantitative differences between the original images and the images from the data scaled by 1.2 using the rounding approach than the randomization approach [46/50 (92%) versus 3/50 (6%), p < 0.05]. Likewise, significantly more studies showed visually noticeable differences between the original images and the images from the data scaled by 1.2 using the rounding approach than randomization [29/50 (58%) versus 1/50 (2%), p < 0.05]. In conclusion, the proposed randomization approach minimizes the scaling-introduced local data change as compared to the conventional approaches. It is preferred for nuclear medicine data scaling.
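
    A minimal sketch of the described randomization approach (stochastic rounding); the function name and the Poisson toy data are ours, not from the paper:

      import numpy as np

      def randomized_scale(counts, factor, rng=None):
          # Scale, then save to short integers by rounding up with probability
          # equal to the fractional part: 9.9 -> 10 with 90% probability, else 9.
          # The expected value, and hence the local count statistics, is preserved.
          rng = np.random.default_rng() if rng is None else rng
          scaled = counts.astype(np.float64) * factor
          floor = np.floor(scaled)
          frac = scaled - floor
          return (floor + (rng.random(scaled.shape) < frac)).astype(np.int16)

      rng = np.random.default_rng(3)
      data = rng.poisson(5.0, size=100_000).astype(np.int16)  # low-count toy data
      out = randomized_scale(data, 1.1, rng)
      print(out.mean() / data.mean())  # close to 1.1, unlike plain truncation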

  13. Effect of randomness in logistic maps

    NASA Astrophysics Data System (ADS)

    Khaleque, Abdul; Sen, Parongama

    2015-01-01

    We study a random logistic map x_{t+1} = a_t x_t [1 - x_t] where the a_t are bounded (q1 ≤ a_t ≤ q2) random variables independently drawn from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However, x_t averaged over different realizations reaches a fixed point. For 1 ≤ a_t ≤ 4, the system shows nonchaotic behavior and the Lyapunov exponent is strongly dependent on the asymmetry of the distribution from which a_t is drawn. Chaotic behavior is seen to occur beyond a threshold value of q1 (q2) when q2 (q1) is varied. The most striking result is that the random map is chaotic even when q2 is less than the threshold value 3.5699⋯ at which chaos occurs in the nonrandom map. We also employ a different method in which a different set of random variables is used for the evolution of two initially identical x values; here the chaotic regime exists for all q1 ≠ q2 values.
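
    The Lyapunov exponent of the random map can be estimated directly from the derivative a_t(1 - 2x_t) along an orbit. The sketch below assumes a uniform distribution of a_t on [q1, q2], one simple choice among the bounded distributions considered:

      import numpy as np

      def lyapunov_random_logistic(q1, q2, T=100_000, x0=0.3, seed=4):
          # Average log-derivative |a_t (1 - 2 x_t)| along one orbit of
          # x_{t+1} = a_t x_t (1 - x_t) with a_t ~ Uniform[q1, q2].
          rng = np.random.default_rng(seed)
          x, acc = x0, 0.0
          for _ in range(T):
              a = rng.uniform(q1, q2)
              acc += np.log(abs(a * (1.0 - 2.0 * x)))
              x = a * x * (1.0 - x)
          return acc / T

      # A positive exponent indicates chaos for the chosen (q1, q2).
      print(lyapunov_random_logistic(3.2, 3.55))
      print(lyapunov_random_logistic(3.8, 4.0))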

  14. Anomalous Dirac point transport due to extended defects in bilayer graphene.

    PubMed

    Shallcross, Sam; Sharma, Sangeeta; Weber, Heiko B

    2017-08-24

    Charge transport at the Dirac point in bilayer graphene exhibits two dramatically different transport states, insulating and metallic, that occur in apparently otherwise indistinguishable experimental samples. We demonstrate that the existence of these two transport states has its origin in an interplay between evanescent modes, which dominate charge transport near the Dirac point, and disordered configurations of extended defects in the form of partial dislocations. In a large ensemble of bilayer systems with randomly positioned partial dislocations, the distribution of conductivities is found to be strongly peaked at both the insulating and metallic limits. We argue that this distribution form, which occurs only at the Dirac point, lies at the heart of the observation of both metallic and insulating states in bilayer graphene. In seemingly indistinguishable bilayer graphene samples, two distinct transport regimes, insulating and metallic, have been identified experimentally. Here, the authors demonstrate that these two states originate from the interplay between extended defects and evanescent modes at the Dirac point.

  15. Spatial Analysis of “Crazy Quilts”, a Class of Potentially Random Aesthetic Artefacts

    PubMed Central

    Westphal-Fitch, Gesche; Fitch, W. Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. “Crazy quilts” represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures. PMID:24066095


  17. Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.

    PubMed

    Ji, Ming; Xiong, Chengjie; Grundman, Michael

    2003-10-01

    In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini-Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our results show that, despite a large amount of missing data, accelerated decline did occur in MMSE scores among AD patients. Our finding supports the clinical belief in the existence of a change point during cognitive decline among AD patients and suggests the use of change-point models for the longitudinal modeling of cognitive decline in AD research.
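
    A simplified fixed-effects sketch of the bilinear change-point idea on synthetic data (the paper itself uses a random-slope and random-intercept mixed model and a parametric bootstrap; here a single series is profiled over candidate change points and compared to the straight-line null via AIC):

      import numpy as np

      def fit_aic(X, y):
          # Ordinary least squares; Gaussian AIC up to an additive constant.
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          n, k = len(y), X.shape[1]
          return n * np.log(resid @ resid / n) + 2 * (k + 1)

      def change_point_aic(t, y):
          # Continuous bilinear model y = b0 + b1*t + b2*(t - tau)_+ profiled
          # over candidate change points tau; returns the AIC-best tau.
          ones = np.ones_like(t)
          aic_null = fit_aic(np.column_stack([ones, t]), y)
          best = min(((fit_aic(np.column_stack([ones, t, np.maximum(t - tau, 0.0)]), y), tau)
                      for tau in t[2:-2]), key=lambda p: p[0])
          return best, aic_null

      rng = np.random.default_rng(5)
      t = np.linspace(0.0, 6.0, 80)  # years of follow-up (synthetic)
      y = 28.0 - 0.5 * t - 2.5 * np.maximum(t - 3.0, 0.0) + rng.normal(0.0, 0.8, t.size)
      (aic_alt, tau_hat), aic_null = change_point_aic(t, y)
      print(f"change point near t = {tau_hat:.2f}; AIC {aic_alt:.1f} vs null {aic_null:.1f}")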

  18. An analytically soluble problem in fully nonlinear statistical gravitational lensing

    NASA Technical Reports Server (NTRS)

    Schneider, P.

    1987-01-01

    The amplification probability distribution p(I)dI for a point source behind a random star field which acts as the deflector exhibits an I^{-3} behavior for large amplification, as can be shown from the universality of the lens equation near critical lines. In this paper it is shown that the amplitude of the I^{-3} tail can be derived exactly for arbitrary mass distribution of the stars, surface mass density of stars and smoothly distributed matter, and large-scale shear. This is then compared with the corresponding linear result.

  19. Comparison of modal test results - Multipoint sine versus single-point random. [for Mariner Jupiter/Saturn spacecraft

    NASA Technical Reports Server (NTRS)

    Leppert, E. L.; Lee, S. H.; Day, F. D.; Chapman, P. C.; Wada, B. K.

    1976-01-01

    The Mariner Jupiter/Saturn (MJS) spacecraft was subjected to the traditional multipoint sine dwell (MPSD) modal test using 111 accelerometer channels, and also to single-point random (SPR) testing using 26 accelerometer channels, and the two methods are compared according to cost, schedule, and technical criteria. A measure of comparison between the systems was devised in terms of the cumulative difference in the kinetic energy distribution of the common accelerometers. The SPR and MPSD methods show acceptable agreement with respect to frequencies and mode damping. The merit of the SPR method is that the excitation points are minimized and the test article can be committed to other uses while data analysis is performed. The MPSD approach allows validity of the data to be determined as the test progresses. Costs are about the same for the two methods.

  20. Extracting Information about the Rotator Cuff from Magnetic Resonance Images Using Deterministic and Random Techniques

    PubMed Central

    De Los Ríos, F. A.; Paluszny, M.

    2015-01-01

    We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower than that of deterministic and other standard statistical techniques. PMID:25650281
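
    The paper's specific point-generation scheme is not spelled out in the abstract; the sketch below shows one standard random statistical method that guarantees uniformly distributed points on a triangular patch, the barycentric reflection trick:

      import numpy as np

      def uniform_points_on_triangle(A, B, C, n, rng=None):
          # Draw (u, v) uniformly on the unit square and reflect the points
          # with u + v > 1 back inside; A + u(B-A) + v(C-A) is then uniform
          # over triangle ABC.
          rng = np.random.default_rng() if rng is None else rng
          u, v = rng.random(n), rng.random(n)
          flip = u + v > 1.0
          u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
          return A + np.outer(u, B - A) + np.outer(v, C - A)

      pts = uniform_points_on_triangle(np.array([0.0, 0.0]),
                                       np.array([1.0, 0.0]),
                                       np.array([0.0, 1.0]), 10_000)
      print(pts.mean(axis=0))  # approaches the centroid (1/3, 1/3)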

  1. Communication: Photoionization of degenerate orbitals for randomly oriented molecules: The effect of time-reversal symmetry on recoil-ion momentum angular distributions

    NASA Astrophysics Data System (ADS)

    Suzuki, Yoshi-Ichi

    2018-04-01

    The photoelectron asymmetry parameter β, which characterizes the direction of electrons ejected from a randomly oriented molecular ensemble by linearly polarized light, is investigated for degenerate orbitals. We show that β is totally symmetric under the symmetry operation of the point group of a molecule, and it has mixed properties under time reversal. Therefore, all degenerate molecular orbitals, except for the case of degeneracy due to time reversal, have the same β (Wigner-Eckart theorem). The exceptions are e-type complex orbitals of the Cn, Sn, Cnh, T, and Th point groups, and calculations on boric acid (C3h symmetry) are performed as an example. However, including those point groups, all degenerate orbitals have the same β if those orbitals are real. We discuss the implications of this operator formalism for molecular alignment and photoelectron circular dichroism.

  2. Theoretical Calculation of the Power Spectra of the Rolling and Yawing Moments on a Wing in Random Turbulence

    NASA Technical Reports Server (NTRS)

    Eggleston, John M; Diederich, Franklin W

    1957-01-01

    The correlation functions and power spectra of the rolling and yawing moments on an airplane wing due to the three components of continuous random turbulence are calculated. The rolling moments due to the longitudinal (horizontal) and normal (vertical) components depend on the spanwise distributions of instantaneous gust intensity, which are taken into account by using the inherent symmetry properties of isotropic turbulence. The results consist of expressions for the correlation functions or spectra of the rolling moment in terms of the point correlation functions of the two components of turbulence. Specific numerical calculations are made for a pair of correlation functions given by simple analytic expressions which fit available experimental data quite well. Calculations are made for four lift distributions. Comparison is made with the results of previous analyses which assumed random turbulence along the flight path and linear variations of gust velocity across the span.

  3. Construction and identification of a D-Vine model applied to the probability distribution of modal parameters in structural dynamics

    NASA Astrophysics Data System (ADS)

    Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.

    2018-01-01

    This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, a D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.

  4. Turbulent transport with intermittency: Expectation of a scalar concentration.

    PubMed

    Rast, Mark Peter; Pinton, Jean-François; Mininni, Pablo D

    2016-04-01

    Scalar transport by turbulent flows is best described in terms of Lagrangian parcel motions. Here we measure the Eulerian distance traveled along Lagrangian trajectories in a simple point vortex flow to determine the probabilistic impulse response function for scalar transport in the absence of molecular diffusion. As expected, the mean squared Eulerian displacement scales ballistically at very short times and diffusively at very long times, with the displacement distribution at any given time approximating that of a random walk. However, significant deviations of the displacement distributions from Rayleigh are found. The probability of long-distance transport is reduced over inertial-range time scales due to spatial and temporal intermittency. This can be modeled as a series of trapping events with durations uniformly distributed below the Eulerian integral time scale. The probability of long-distance transport is, on the other hand, enhanced beyond that of the random walk both for times shorter than the Lagrangian integral time and for times longer than the Eulerian integral time. The very short-time enhancement reflects the underlying Lagrangian velocity distribution, while that at very long times results from the spatial and temporal variation of the flow at the largest scales. The probabilistic impulse response function, and with it the expectation value of the scalar concentration at any point in space and time, can be modeled using only the evolution of the lowest spatial wave-number modes (the mean and the lowest harmonic) and an eddy-based constrained random walk that captures the essential velocity phase relations associated with advection by vortex motions. Preliminary examination of Lagrangian tracers in three-dimensional homogeneous isotropic turbulence suggests that transport in that setting can be similarly modeled.

  5. Superslow relaxation in identical phase oscillators with random and frustrated interactions

    NASA Astrophysics Data System (ADS)

    Daido, H.

    2018-04-01

    This paper is concerned with the relaxation dynamics of a large population of identical phase oscillators, each of which interacts with all the others through random couplings whose parameters obey the same Gaussian distribution with the average equal to zero and are mutually independent. The results obtained by numerical simulation suggest that for the infinite-size system, the absolute value of Kuramoto's order parameter exhibits superslow relaxation, i.e., 1/ln t as time t increases. Moreover, the statistics on both the transient time T for the system to reach a fixed point and the absolute value of Kuramoto's order parameter at t = T are also presented together with their distribution densities over many realizations of the coupling parameters.
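
    A minimal sketch of such a population, with assumptions made explicit: identical oscillators, independent zero-mean Gaussian couplings scaled by 1/√N (the normalization is ours), Euler integration, and an initial state near synchrony so that the slow decay of Kuramoto's order parameter is visible:

      import numpy as np

      rng = np.random.default_rng(6)
      N, dt, T = 200, 0.05, 2000

      K = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)  # random, frustrated couplings
      theta = 0.1 * rng.standard_normal(N)           # start near the synchronized state

      r_history = []
      for step in range(T):
          # Identical oscillators: no natural-frequency term, coupling only.
          dtheta = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
          theta = theta + dt * dtheta
          r_history.append(abs(np.exp(1j * theta).mean()))

      # Superslow (~1/ln t) relaxation appears as a very gradual decay of r.
      for s in (10, 100, 1000, 1999):
          print(s, round(r_history[s], 4))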

  6. IKONOS geometric characterization

    USGS Publications Warehouse

    Helder, Dennis; Coan, Michael; Patrick, Kevin; Gaska, Peter

    2003-01-01

    The IKONOS spacecraft acquired images on July 3, 17, and 25, and August 13, 2001, of Brookings, SD, a small city in east central South Dakota, and on May 22, June 30, and July 30, 2000, of the rural area around the EROS Data Center. South Dakota State University (SDSU) evaluated the Brookings scenes and the USGS EROS Data Center (EDC) evaluated the other scenes. The images evaluated by SDSU utilized various natural objects and man-made features as identifiable targets randomly distributed throughout the scenes, while the images evaluated by EDC utilized pre-marked artificial points (panel points) to provide the best possible targets distributed in a grid pattern. Space Imaging provided products at different processing levels to each institution. For each scene, the pixel (line, sample) locations of the various targets were compared to field-observed, survey-grade Global Positioning System locations. Patterns of error distribution for each product were plotted, and a variety of statistical statements of accuracy are made. The IKONOS sensor also acquired 12 pairs of stereo images of globally distributed scenes between April 2000 and April 2001. For each scene, analysts at the National Imagery and Mapping Agency (NIMA) compared derived photogrammetric coordinates to their corresponding NIMA field-surveyed ground control points (GCPs). NIMA analysts determined horizontal and vertical accuracies by averaging the differences between the derived photogrammetric points and the field-surveyed GCPs for all 12 stereo pairs. Patterns of error distribution for each scene are presented.

  7. Distribution function of random strains in an elastically anisotropic continuum and defect strengths of Tm^{3+} impurity ions in crystals with zircon structure

    NASA Astrophysics Data System (ADS)

    Malkin, B. Z.; Abishev, N. M.; Baibekov, E. I.; Pytalev, D. S.; Boldyrev, K. N.; Popova, M. N.; Bettinelli, M.

    2017-07-01

    We construct a distribution function of the strain-tensor components induced by point defects in an elastically anisotropic continuum, which can be used to account quantitatively for many effects observed in different branches of condensed matter physics. Parameters of the derived six-dimensional generalized Lorentz distribution are expressed through the integrals computed over the array of strains. The distribution functions for the cubic diamond and elpasolite crystals and tetragonal crystals with the zircon and scheelite structures are presented. Our theoretical approach is supported by a successful modeling of specific line shapes of singlet-doublet transitions of the Tm^{3+} ions doped into ABO_4 (A = Y, Lu; B = P, V) crystals with zircon structure, observed in high-resolution optical spectra. The values of the defect strengths of impurity Tm^{3+} ions in the oxygen surroundings, obtained as a result of this modeling, can be used in future studies of random strains in different rare-earth oxides.

  8. Pseudorandom number generation using chaotic true orbits of the Bernoulli map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro

    We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
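
    The principle of exact (true-orbit) computation can be sketched compactly: store the state as (p + q√2)/r with integers p, q, r, so that the Bernoulli map x → 2x mod 1 only doubles p and q and subtracts an exactly computed floor. This is an illustration of the idea only; the paper's choice of quadratic algebraic integers and its seed-selection scheme are not reproduced here:

      import math

      def floor_quad(p, q, r):
          # Exact floor((p + q*sqrt(2)) / r) for integers p, q and r > 0:
          # a float guess corrected by integer-only (squared) comparisons.
          n = math.floor((p + q * math.sqrt(2)) / r)
          def le(m):  # is m*r <= p + q*sqrt(2) ?
              lhs = m * r - p
              if q >= 0:
                  return lhs <= 0 or lhs * lhs <= 2 * q * q
              return lhs < 0 and lhs * lhs >= 2 * q * q
          while not le(n):
              n -= 1
          while le(n + 1):
              n += 1
          return n

      def bernoulli_bits(n_bits, p=1, q=1, r=4):
          # True orbit of x -> 2x mod 1 from x0 = (p + q*sqrt(2))/r; one bit
          # per step. p and q double each step, so Python's big integers are
          # essential; the float guess above is adequate at these lengths.
          bits = []
          for _ in range(n_bits):
              p, q = 2 * p, 2 * q          # x -> 2x
              f = floor_quad(p, q, r)      # f is 0 or 1 while x stays in [0, 1)
              p -= f * r                   # ... mod 1
              bits.append(f)
          return bits

      print("".join(map(str, bernoulli_bits(64))))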

  9. Numerical study of the directed polymer in a 1 + 3 dimensional random medium

    NASA Astrophysics Data System (ADS)

    Monthus, C.; Garel, T.

    2006-09-01

    The directed polymer in a 1+3 dimensional random medium is known to present a disorder-induced phase transition. For a polymer of length L, the high-temperature phase is characterized by a diffusive behavior for the end-point displacement, R² ~ L, and by free-energy fluctuations of order ΔF(L) ~ O(1). The low-temperature phase is characterized by an anomalous wandering exponent, R²/L ~ L^ω, and by free-energy fluctuations of order ΔF(L) ~ L^ω, where ω ~ 0.18. In this paper, we first study the scaling behavior of various properties to localize the critical temperature T_c. Our results concerning R²/L and ΔF(L) point towards 0.76 < T_c ≤ T_2 = 0.79, so our conclusion is that T_c is equal or very close to the upper bound T_2 derived by Derrida and coworkers (T_2 corresponds to the temperature above which the ratio \overline{Z_L^2}/(\overline{Z_L})^2 remains finite as L → ∞). We then present histograms of the free energy, energy, and entropy over disorder samples. For T ≫ T_c, the free-energy distribution is found to be Gaussian. For T ≪ T_c, the free-energy distribution coincides with the ground-state energy distribution, in agreement with the zero-temperature fixed-point picture. Moreover, the entropy fluctuations are of order ΔS ~ L^{1/2} and follow a Gaussian distribution, in agreement with the droplet predictions, where the free-energy term ΔF ~ L^ω is a near cancellation of energy and entropy contributions of order L^{1/2}.

  10. Diffusion in randomly perturbed dissipative dynamics

    NASA Astrophysics Data System (ADS)

    Rodrigues, Christian S.; Chechkin, Aleksei V.; de Moura, Alessandro P. S.; Grebogi, Celso; Klages, Rainer

    2014-11-01

    Dynamical systems having many coexisting attractors present interesting properties from both fundamental theoretical and modelling points of view. When such dynamics is under bounded random perturbations, the basins of attraction are no longer invariant and there is the possibility of transport among them. Here we introduce a basic theoretical setting which enables us to study this hopping process from the perspective of anomalous transport using the concept of a random dynamical system with holes. We apply it to a simple model by investigating the role of hyperbolicity for the transport among basins. We show numerically that our system exhibits non-Gaussian position distributions, power-law escape times, and subdiffusion. Our simulation results are reproduced consistently from stochastic continuous time random walk theory.

  11. Breaking through the bandwidth barrier in distributed fiber vibration sensing by sub-Nyquist randomized sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zhu, Tao; Zheng, Hua; Kuang, Yang; Liu, Min; Huang, Wei

    2017-04-01

    The round-trip time of the light pulse limits the maximum detectable frequency response range of vibration in phase-sensitive optical time domain reflectometry (φ-OTDR). We propose a method to break the frequency response range restriction of the φ-OTDR system by modulating the light pulse interval randomly, which enables random sampling for every vibration point in a long sensing fiber. This sub-Nyquist randomized sampling method is suited to detecting sparse wideband-frequency vibration signals. A resonance vibration signal reaching the MHz range with dozens of frequency components, as well as a 1.153 MHz single-frequency vibration signal, are clearly identified over a sensing range of 9.6 km with a 10 kHz maximum sampling rate.
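
    The gain from randomized sampling can be illustrated with a Lomb-Scargle periodogram, which accepts irregular sample times. The numbers echo the paper's figures, but the script is a toy, not a model of the φ-OTDR instrument:

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(7)

      # A 1.153 MHz vibration probed at random times with a mean rate of only
      # 10 kHz, far below the Nyquist rate for this frequency.
      f_vib, mean_rate, n_pulses = 1.153e6, 1.0e4, 4000
      t = np.cumsum(rng.exponential(1.0 / mean_rate, n_pulses))  # random pulse times
      y = np.sin(2.0 * np.pi * f_vib * t)

      freqs = np.linspace(1.0e6, 1.3e6, 3001)          # search band, Hz
      pgram = lombscargle(t, y, 2.0 * np.pi * freqs)   # expects angular frequencies
      print(f"peak at {freqs[np.argmax(pgram)] / 1e6:.4f} MHz")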

  12. Community point distribution of insecticide-treated bed nets and community health worker hang-up visits in rural Zambia: a decision-focused evaluation.

    PubMed

    Wang, Paul; Connor, Alison L; Joudeh, Ammar S; Steinberg, Jeffrey; Ndhlovu, Ketty; Siyolwe, Musanda; Ntebeka, Bristol; Chibuye, Benjamin; Hamainza, Busiku

    2016-03-03

    In 2013, the Zambian Ministry of Health through its National Malaria Control Programme distributed over two million insecticide-treated bed nets (ITNs) in four provinces using a door-to-door distribution strategy, and more than 6 million ITNs were allocated to be distributed in 2014. This study was commissioned to measure attendance rates at a community point distribution and to examine the impact of follow-up community health worker (CHW) hang-up visits on short and medium-term ITN retention and usage with a view of informing optimal ITN distribution strategy in Zambia. Households received ITNs at community point distributions conducted in three rural communities in Rufunsa District, Zambia. Households were then randomly allocated into five groups to receive CHW visits to hang any unhung ITNs at different intervals: 1-3, 5-7, 10-12, 15-17 days, and no hang-up visit. Follow-up surveys were conducted among all households at 7-11 weeks after distribution and at 5-6 months after distribution to measure short- and medium-term household retention and usage of ITNs. Of the 560 pre-registered households, 540 (96.4 %) attended the community point distribution. Self-installation of ITNs by households increased over the first 10 days after the community point distribution. Retention levels remained high over time with 90.2 % of distributed ITNs still in the household at 7-11 weeks and 85.7 % at 5-6 months. Retention did not differ between households that received a CHW visit and those that did not. At 7-11 weeks, households had an average of 73.8 % of sleeping spaces covered compared to 80.3 % at 5-6 months. On average, 65.6 % of distributed ITNs were hanging at 7-11 weeks compared to 63.1 % at 5-6 months. While a CHW hang-up visit was associated with increased usage at 7-11 weeks, this difference was no longer apparent at 5-6 months. This evaluation revealed that (1) the community point distributions achieved high attendance rates followed by acceptable rates of short-term and medium-term ITN retention and usage, as compared to reported rates achieved by door-to-door distributions in the recent past, (2) CHW hang-up visits had a modest short-term impact on ITN usage but no medium-term effect, and (3) community point distributions can yield sizeable time savings compared to door-to-door distributions.

  13. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
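
    A minimal sketch of the error-propagation application: draw variates from the distribution of the measurements, push them through a nonlinear transform (here polar to Cartesian, with illustrative moments), and read the expectation and covariance off the sample, with no derivatives or linearization:

      import numpy as np

      rng = np.random.default_rng(8)

      mean = np.array([10.0, 0.5])                  # e.g. a distance and an angle
      cov = np.array([[0.04, 0.0], [0.0, 1.0e-4]])  # illustrative covariance
      samples = rng.multivariate_normal(mean, cov, size=100_000)

      # Nonlinear transform: polar to Cartesian coordinates.
      xy = np.column_stack([samples[:, 0] * np.cos(samples[:, 1]),
                            samples[:, 0] * np.sin(samples[:, 1])])

      print("expectation:", xy.mean(axis=0))
      print("covariance:\n", np.cov(xy, rowvar=False))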

  14. Self-Avoiding Walks on the Random Lattice and the Random Hopping Model on a Cayley Tree

    NASA Astrophysics Data System (ADS)

    Kim, Yup

    Using a field theoretic method based on the replica trick, it is proved that the three-parameter renormalization group for an n-vector model with quenched randomness reduces to a two-parameter one in the limit n → 0, which corresponds to self-avoiding walks (SAWs). This is also shown by the explicit calculation of the renormalization group recursion relations to second order in ε. From this reduction we find that SAWs on the random lattice are in the same universality class as SAWs on the regular lattice. By analogy with the case of the n-vector model with cubic anisotropy in the limit n → 1, the fixed-point structure of the n-vector model with randomness is analyzed in the SAW limit, so that a physical interpretation of the unphysical fixed point is given. Corrections to the values of critical exponents of the unphysical fixed point published previously are also given. Next we formulate an integral equation and recursion relations for the configurationally averaged one-particle Green's function of the random hopping model on a Cayley tree of coordination number (σ + 1). This formalism is tested by applying it successfully to the nonrandom model. Using this scheme for 1 ≪ σ < ∞, we calculate the density of states of this model with a Gaussian distribution of hopping matrix elements in the range of energy E² > E_c², where E_c is a critical energy described below. The singularity in the Green's function which occurs at energy E_1^{(0)} for σ = ∞ is shifted to complex energy E_1 (on the unphysical sheet of energy E) for small σ^{-1}. This calculation shows that the density of states is a smooth function of energy E around the critical energy E_c = Re E_1, in accord with Wegner's theorem. In this formulation the density of states has no sharp phase transition on the real axis of E because E_1 has developed an imaginary part. Using the Lifschitz argument, we calculate the density of states near the band edge for the model when the hopping matrix elements are governed by a bounded probability distribution. It is also shown within the dynamical-system language that the density of states of the model with a bounded distribution never vanishes inside the band, and we suggest a theoretical mechanism for the formation of energy bands.

  15. Consistent and powerful non-Euclidean graph-based change-point test with applications to segmenting random interfered video data.

    PubMed

    Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna

    2018-06-05

    The change-point detection has been carried out in terms of the Euclidean minimum spanning tree (MST) and shortest Hamiltonian path (SHP), with successful applications in the determination of authorship of a classic novel, the detection of change in a network over time, the detection of cell divisions, etc. However, these Euclidean graph-based tests may fail if a dataset contains random interferences. To solve this problem, we present a powerful non-Euclidean SHP-based test, which is consistent and distribution-free. The simulation shows that the test is more powerful than both Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability in detecting both landing and departure times in video data of bees' flower visits is illustrated.

  16. correlcalc: Two-point correlation function from redshift surveys

    NASA Astrophysics Data System (ADS)

    Rohin, Yeluripati

    2017-11-01

    correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys. It can be used for any assumed geometry or cosmology model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelised code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA), and Declination (DEC) data of galaxy and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realisations of the (3D) anisotropic 2pCF. Optionally, it makes healpix maps of the survey, providing visualization.
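
    correlcalc's own interface is not reproduced here; the sketch below shows the underlying computation on a toy 2D catalog, assuming the standard Landy-Szalay estimator ξ = (DD − 2DR + RR)/RR with pair counts from scikit-learn's BallTree:

      import numpy as np
      from sklearn.neighbors import BallTree

      def pair_counts(tree, pts, edges):
          # Cumulative neighbour counts at each radius, differenced into bins;
          # self-pairs (distance 0) fall below edges[0] and drop out in the diff.
          cum = np.array([tree.query_radius(pts, r, count_only=True).sum()
                          for r in edges])
          return np.diff(cum)

      def landy_szalay(data, rand, edges):
          td, tr = BallTree(data), BallTree(rand)
          nd, nr = len(data), len(rand)
          dd = pair_counts(td, data, edges) / (nd * (nd - 1))
          rr = pair_counts(tr, rand, edges) / (nr * (nr - 1))
          dr = pair_counts(tr, data, edges) / (nd * nr)
          return (dd - 2.0 * dr + rr) / rr

      rng = np.random.default_rng(9)
      data = rng.random((2000, 2))   # unclustered toy "galaxies"
      rand = rng.random((4000, 2))   # uniform random catalog
      edges = np.linspace(0.01, 0.2, 9)
      print(landy_szalay(data, rand, edges))  # near zero: no clustering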

  17. The Central Limit Theorem for Supercritical Oriented Percolation in Two Dimensions

    NASA Astrophysics Data System (ADS)

    Tzioufas, Achillefs

    2018-04-01

    We consider the cardinality of supercritical oriented bond percolation in two dimensions. We show that, whenever the origin is conditioned to percolate, the appropriately normalized process converges asymptotically in distribution to the standard normal law. This resolves a longstanding open problem pointed out in several instances in the literature. The result also applies to the continuous-time analog of the process, viz. the basic one-dimensional contact process. We also derive general random-indices central limit theorems for associated random variables as byproducts of our proof.


  19. An efficient distribution method for nonlinear transport problems in highly heterogeneous stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi

    2016-04-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.

  20. Backward deletion to minimize prediction errors in models from factorial experiments with zero to six center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1980-01-01

    Population model coefficients were chosen to simulate a saturated 2 to the 4th fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F-distribution, on an order statistics distribution of Cochran's, and on a combination of the two. The strategies were compared under the criterion of minimizing the maximum prediction error, wherever it occurred, among the two-level factorial points. The strategies were evaluated for each of the conditions of 0, 1, 2, 3, 4, 5, or 6 center points. Three classes of strategies were identified as being appropriate, depending on the extent of the experimenter's prior knowledge. In almost every case the best strategy was found to be unique according to the number of center points. Among the three classes of strategies, a security regret class of strategy was demonstrated as being widely useful in that over a range of coefficients of variation from 4 to 65%, the maximum predictive error was never increased by more than 12% over what it would have been if the best strategy had been used for the particular coefficient of variation. The relative efficiency of the experiment, when using the security regret strategy, was examined as a function of the number of center points, and was found to be best when the design used one center point.

  1. Localized surface plasmon enhanced cellular imaging using random metallic structures

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Lee, Wonju; Kim, Donghyun

    2017-02-01

    We have studied fluorescence cellular imaging with randomly distributed localized near-fields induced by silver nano-islands. For the fabrication of the nano-islands, a 10-nm silver thin film was evaporated on a BK7 glass substrate with an adhesion layer of 2-nm thick chromium. A micrometer-sized silver square pattern was defined using e-beam lithography and the film was then annealed at 200°C. Raw images were restored using the electric field distribution produced on the surface of the random nano-islands. The nano-islands were modeled from SEM images. A 488-nm p-polarized light source was set to be incident at 60°. Simulation results show that localized electric fields were created among the nano-islands and that their average size was 135 nm. The feasibility was tested using conventional total internal reflection fluorescence microscopy while the angle of incidence was adjusted to maximize field enhancement. Mouse macrophage cells were cultured on the nano-islands, and actin filaments were selectively stained with FITC-conjugated phalloidin. Acquired images were deconvolved based on linear imaging theory, in which the molecular distribution was sampled by the randomly distributed localized near-field and blurred by the point spread function of the far-field optics. The optimum fluorophore distribution was probabilistically estimated by repetitively matching a raw image. The deconvolved images are estimated to have a resolution in the range of 100-150 nm, largely determined by the size of the localized near-fields. We also discuss and compare the results with images acquired with periodic nano-aperture arrays in various optical configurations to excite localized plasmonic fields and to produce super-resolved molecular images.

  2. Clustering, randomness, and regularity in cloud fields. 4. Stratocumulus cloud fields

    NASA Astrophysics Data System (ADS)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-07-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (>900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.


  4. Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick

    2018-05-01

    For an N × N Haar-distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and, by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L^1-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.

  5. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448

  6. Scattering of electromagnetic waves from a half-space of randomly distributed discrete scatterers and polarized backscattering ratio law

    NASA Technical Reports Server (NTRS)

    Zhu, P. Y.

    1991-01-01

    The effective-medium approximation is applied to investigate scattering from a half-space of randomly and densely distributed discrete scatterers. Starting from the vector wave equations, the bistatic and backscattering are calculated using an approximation called the effective-medium Born approximation, a particular treatment of the Green's functions, and special coordinates whose origin is set at the field point. An analytic closed-form solution for the backscattering is obtained, and it exhibits a depolarization effect. The theoretical results are in good agreement with experimental measurements for snow and for multiyear and first-year sea ice. The root product ratio of polarization to depolarization in backscattering is equal to 8; this result constitutes a law about polarized scattering phenomena in nature.

  7. Congruent biogeographical disjunctions at a continent-wide scale: Quantifying and clarifying the role of biogeographic barriers in the Australian tropics.

    PubMed

    Edwards, Robert D; Crisp, Michael D; Cook, Dianne H; Cook, Lyn G

    2017-01-01

    To test whether novel and previously hypothesized biogeographic barriers in the Australian Tropics represent significant disjunction points, hard barriers, or both for the distribution of plants. Australian tropics: Australian Monsoon Tropics and Australian Wet Tropics. The presence or absence of 6,861 plant species was scored across 13 putative biogeographic barriers in the Australian Tropics, including two that have not previously been recognized. Randomizations of these data were used to test whether more species showed disjunctions (gaps in distribution) or likely barriers (range limits) at these points than expected by chance. Two novel disjunctions in the Australian Tropics flora are identified in addition to eleven putative barriers previously recognized for animals. Of these, eleven disjunction points (all within the Australian Monsoon Tropics) were found to correspond to range-ending barriers for a significant number of species, while neither of the two disjunctions found within the Australian Wet Tropics limited a significant number of species' ranges. Biogeographic barriers present significant distributional limits to native plant species in the Australian Monsoon Tropics but not in the Australian Wet Tropics.
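
    A minimal sketch of this kind of randomization test in Python (the data and the null model are hypothetical placeholders of ours; the paper's actual randomization scheme may be more structured): shuffle species-by-site presences and compare the observed number of species whose range ends at a candidate barrier against the shuffled distribution.

      import numpy as np

      rng = np.random.default_rng(1)

      def range_end_count(presence, barrier):
          # presence: (species x sites) 0/1 matrix, sites ordered along a transect;
          # a species "ends" at the barrier if it occurs on exactly one side
          left = presence[:, :barrier].any(axis=1)
          right = presence[:, barrier:].any(axis=1)
          return int(np.sum(left ^ right))

      presence = (rng.random((500, 13)) < 0.4).astype(int)   # toy data
      barrier = 6
      observed = range_end_count(presence, barrier)

      null = np.array([range_end_count(rng.permuted(presence, axis=1), barrier)
                       for _ in range(999)])                 # naive shuffled null
      p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)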

  8. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    NASA Astrophysics Data System (ADS)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and hence can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit-radius hypersphere, (ii) relocating the N points onto a representative set of N hyperspheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality-reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
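
    One possible reading of steps (i)-(iii) in Python, as a sketch under our own assumptions (random rather than repulsion-optimised sphere points, stratified chi-distributed radii, and a toy exponential covariance):

      import numpy as np
      from scipy.stats import chi2

      rng = np.random.default_rng(2)
      N, M = 50, 100                          # realizations, grid locations

      # (i) N points on the unit (M-1)-sphere; a true minimum-energy set
      #     would optimise their mutual repulsion instead of sampling
      u = rng.standard_normal((N, M))
      u /= np.linalg.norm(u, axis=1, keepdims=True)

      # (ii) stratified radii so the N shells are representative of ||Z||,
      #      Z ~ N(0, I_M), whose squared norm is chi-square with M dof
      radii = np.sqrt(chi2.ppf((np.arange(N) + 0.5) / N, df=M))

      # (iii) map sphere points onto hyper-ellipsoids of N(0, C) via Cholesky
      C = np.exp(-np.abs(np.subtract.outer(np.arange(M), np.arange(M))) / 10.0)
      L = np.linalg.cholesky(C)
      log_K = radii[:, None] * u @ L.T        # N log-conductivity realizations
      K = np.exp(log_K)                       # lognormal conductivity fields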

  9. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
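
    Two of the building blocks mentioned here, sketched in Python under our own toy choices (exponential deviates by inverse transform, and a Metropolis chain for a quartic energy):

      import numpy as np

      rng = np.random.default_rng(3)

      # Inverse-transform sampling: exponential deviates from uniforms,
      # pdf p(x) = exp(-x) for x >= 0, cdf F(x) = 1 - exp(-x)
      u = rng.random(100_000)
      x = -np.log(1.0 - u)

      # Metropolis sampling of the Boltzmann weight exp(-E(x)), with E(x) = x**4
      def metropolis(E, x0, steps, delta, rng):
          xs, x = [], x0
          for _ in range(steps):
              y = x + rng.uniform(-delta, delta)       # symmetric proposal
              if rng.random() < np.exp(E(x) - E(y)):   # accept with min(1, e^-dE)
                  x = y
              xs.append(x)
          return np.array(xs)

      chain = metropolis(lambda x: x**4, 0.0, 50_000, 1.0, rng)
      print(x.mean(), chain.var())   # exponential mean ~1; variance under exp(-x^4)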

  10. EFFECTS OF THE INSECT JUVENILE HORMONE AGONIST, METHOPRENE, ON FEMALE GROWTH AND REPRODUCTION IN THE GULF SAND FIDDLER CRAB.

    EPA Science Inventory

    Adult Uca panacea were collected from the shores of West Bay Point near Panama City, Florida, in late April of 2000. These crabs (250 females, 100 males per pond) were distributed randomly into six specially constructed estuarine ponds to determine the effects of field applicatio...

  11. Multi-Scale Fracture Mechanics of 3-D Reinforced Composites

    DTIC Science & Technology

    2010-02-26

    cohesive energy over the interface between plies n and n+1, separated by the horizontal surface z = z_n, is w = ∫ g(K_B) ds (16). In this case the normal vector ... where INP is the total number of integration points and V_n is the volume of the n-th ply. Note that the random distribution of initial strength ...

  12. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.

  13. Modeling fluid diffusion in cerebral white matter with random walks in complex environments

    NASA Astrophysics Data System (ADS)

    Levy, Amichai; Cwilich, Gabriel; Buldyrev, Sergey V.; Weeden, Van J.

    2012-02-01

    Recent studies with diffusion MRI have shown new aspects of geometric order in the brain, including complex path coherence within the cerebral cortex and the organization of cerebral white matter and connectivity across multiple scales. The main assumption of these studies is that water molecules diffuse along the myelin sheaths of neuron axons in the white matter, so that the anisotropy of their diffusion tensor observed by MRI can provide information about the direction of the axons connecting different parts of the brain. We model the diffusion of particles confined in the space between bundles of cylindrical obstacles representing fibrous structures of various orientations. We have investigated the directional properties of the diffusion by studying the angular distribution of the end points of the random walks as a function of their length, to understand the scale over which the distribution randomizes. We show evidence of a qualitative change in the behavior of the diffusion for different volume fractions of obstacles. Comparisons with three-dimensional MRI images are illustrated.

  14. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.

  15. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
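
    A rough Python illustration of the idea (not the paper's construction: the bands below are pointwise, derived from the Beta law of uniform order statistics, whereas the paper calibrates bands with simultaneous 1-α coverage):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      x = np.sort(rng.normal(10.0, 2.0, 50))       # observed sample, sorted
      n = len(x)
      k = np.arange(1, n + 1)
      p = (k - 0.375) / (n + 0.25)                 # Blom plotting positions
      q = stats.norm.ppf(p)                        # theoretical normal quantiles
      r = np.corrcoef(q, x)[0, 1]                  # probability-plot correlation

      # pointwise 95% bands: k-th uniform order statistic ~ Beta(k, n-k+1),
      # mapped through the fitted normal distribution
      lo = stats.norm.ppf(stats.beta.ppf(0.025, k, n - k + 1), x.mean(), x.std(ddof=1))
      hi = stats.norm.ppf(stats.beta.ppf(0.975, k, n - k + 1), x.mean(), x.std(ddof=1))
      inside = np.all((x >= lo) & (x <= hi))       # crude graphical normality check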

  16. Analysis of backward error recovery for concurrent processes with recovery blocks

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1982-01-01

    Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points (PRPs) are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables, were developed. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance in case PRPs are used were estimated.

  17. Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.

    1992-12-01

    During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintains its identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez et al. (1990) which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis also is made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards regularity. For clouds less than 1 km in diameter, the average nearest-neighbor distance is equal to 3-7 cloud diameters. For larger clouds, the ratio of cloud nearest-neighbor distance to cloud diameter increases sharply with increasing cloud diameter. This demonstrates that large clouds inhibit the growth of other large clouds in their vicinity. Nevertheless, this leads to random distributions of large clouds, not regularity.

  18. A stochastic convolution/superposition method with isocenter sampling to evaluate intrafraction motion effects in IMRT.

    PubMed

    Naqvi, Shahid A; D'Souza, Warren D

    2005-04-01

    Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality and speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting at a random phase. For a 2 cm motion amplitude, we found that a ten-fraction average of the film measurements agreed with the calculated infinite-fraction average to within 2 mm in the isodose curves. The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulations with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading that is asymmetric about the static dose distribution. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
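
    To make the interplay-averaging claim concrete, a small Python sketch under our own toy assumptions (an idealised 1-D error-function penumbra, rigid sinusoidal motion, a random starting phase per fraction), comparing a 10-fraction average with the "dose convolution" limit:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      x = np.linspace(-5.0, 5.0, 501)               # cm, axis across a field edge
      static = norm.cdf(x / 0.4)                    # toy static penumbra profile
      A, T = 2.0, 4.0                               # motion amplitude (cm), period (s)

      def one_fraction(phase, beam_on=60.0, dt=0.05):
          t = np.arange(0.0, beam_on, dt)
          s = A * np.sin(2.0 * np.pi * t / T + phase)    # rigid target motion
          return np.mean([norm.cdf((x - si) / 0.4) for si in s], axis=0)

      avg10 = np.mean([one_fraction(rng.uniform(0.0, 2.0 * np.pi))
                       for _ in range(10)], axis=0)      # 10-fraction average

      # infinite-fraction limit: static profile blurred by the arcsine motion PDF
      pdf = np.zeros_like(x)
      inside = np.abs(x) < A
      pdf[inside] = 1.0 / (np.pi * np.sqrt(A**2 - x[inside]**2))
      pdf /= pdf.sum()
      inf_avg = np.convolve(static, pdf, mode="same")
      core = np.abs(x) < 2.5                        # avoid zero-padding edge effects
      print(np.max(np.abs(avg10 - inf_avg)[core]))  # small: interplay averages out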

  19. Connection between two statistical approaches for the modelling of particle velocity and concentration distributions in turbulent flow: The mesoscopic Eulerian formalism and the two-point probability density function method

    NASA Astrophysics Data System (ADS)

    Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre

    2006-12-01

    The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.

  20. Suppression of thermal frequency noise in erbium-doped fiber random lasers.

    PubMed

    Saxena, Bhavaye; Bao, Xiaoyi; Chen, Liang

    2014-02-15

    Frequency and intensity noise are characterized for erbium-doped fiber (EDF) random lasers based on Rayleigh distributed feedback mechanism. We propose a theoretical model for the frequency noise of such random lasers using the property of random phase modulations from multiple scattering points in ultralong fibers. We find that the Rayleigh feedback suppresses the noise at higher frequencies by introducing a Lorentzian envelope over the thermal frequency noise of a long fiber cavity. The theoretical model and measured frequency noise agree quantitatively with two fitting parameters. The random laser exhibits a noise level of 6  Hz²/Hz at 2 kHz, which is lower than what is found in conventional narrow-linewidth EDF fiber lasers and nonplanar ring laser oscillators (NPROs) by a factor of 166 and 2, respectively. The frequency noise has a minimum value for an optimum length of the Rayleigh scattering fiber.

  1. Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi

    2015-07-01

    The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (the Nyquist rate). Achieving high resolution requires the acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility between experiments in which the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve the random-seed-dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
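
    A minimal sketch of the contrast in Python (our own toy decay constant and grid; real schedules come from NUS scheduling tools): plain weighted NUS draws all points independently from the PDF, while jittered sampling takes one draw per equal-probability stratum of its CDF, pinning down the realised sensitivity of each subset.

      import numpy as np

      rng = np.random.default_rng(6)
      n_total, n_keep = 1024, 128
      t = np.arange(n_total)
      pdf = np.exp(-t / 300.0)                   # sampling density matched to a
      pdf /= pdf.sum()                           # decaying signal envelope
      cdf = np.cumsum(pdf)

      # plain weighted NUS: independent draws (large seed-to-seed spread)
      plain = np.searchsorted(cdf, rng.random(n_keep))

      # jittered NUS: one draw per equal-probability stratum of the CDF
      strata = (np.arange(n_keep) + rng.random(n_keep)) / n_keep
      jittered = np.searchsorted(cdf, strata)    # (duplicates possible on a grid)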

  2. Spatial distribution of soil contaminated with Toxoplasma gondii oocysts in relation to the distribution and use of domestic cat defecation sites on dairy farms.

    PubMed

    Simon, J A; Kurdzielewicz, S; Jeanniot, E; Dupuis, E; Marnef, F; Aubert, D; Villena, I; Poulle, M-L

    2017-05-01

    Little information is available on the relationship between the spatial distribution of zoonotic parasites in soil and the pattern of hosts' faeces deposition at a local scale. In this study, the spatial distribution of soil contaminated by the parasite Toxoplasma gondii was investigated in relation to the distribution and use of the defecation sites of its definitive host, the domestic cat (Felis catus). The study was conducted on six dairy farms with a high number of cats (seven to 30 cats). During regular visits to the farms over a 10-month period, the cat population and cat defecation sites (latrines and sites of scattered faeces) on each farm were systematically surveyed. During the last visit, 561 soil samples were collected from defecation sites and random points, and these samples were searched for T. gondii DNA using real-time quantitative PCR. Depending on the farm, T. gondii DNA was detected in 37.7-66.3% of the soil samples. The proportion of contaminated samples at a farm was positively correlated with the rate of new cat latrines replacing former cat latrines, suggesting that inconstancy in use of a latrine by cats affects the distribution of T. gondii in soil. On the farms, known cat defecation sites were significantly more often contaminated than random points, but 25-62.5% of the latter were also found to be contaminated. Lastly, the proportion of positive T. gondii samples in latrines was related to the proximity of the cats' main feeding and resting sites on the farms. This study demonstrates that T. gondii can be widely distributed in farm soil despite the heterogeneous distribution of cat faeces. This supports the hypothesis that farms are hotspot areas for the risk of T. gondii oocyst-induced infection in rural environments. Copyright © 2017 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.

  3. Theory of Random Copolymer Fractionation in Columns

    NASA Astrophysics Data System (ADS)

    Enders, Sabine

    Random copolymers show polydispersity both with respect to molecular weight and with respect to chemical composition, and their physical and chemical properties depend on both polydispersities. For special applications, the two-dimensional distribution function must be adjusted to the application purpose. The adjustment can be achieved by polymer fractionation. From the thermodynamic point of view, the distribution function can be adjusted by the successive establishment of liquid-liquid equilibria (LLE) for suitable solutions of the polymer to be fractionated. The fractionation column is divided into theoretical stages. Assuming an LLE on each theoretical stage, the polymer fractionation can be modeled using phase-equilibrium thermodynamics. As examples, simulations of stepwise fractionation in one direction, cross-fractionation in two directions, and two different column fractionations (Baker-Williams fractionation and continuous polymer fractionation) have been investigated. The simulation delivers the distribution according to molecular weight and chemical composition in every obtained fraction, depending on the operating conditions, and is able to optimize the fractionation effectively.

  4. Self-consistent approach for neutral community models with speciation

    NASA Astrophysics Data System (ADS)

    Haegeman, Bart; Etienne, Rampal S.

    2010-03-01

    Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event at the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species’ abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model, in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful to study other speciation processes in neutral community models as well.

  5. Modeling a space-based quantum link that includes an adaptive optics system

    NASA Astrophysics Data System (ADS)

    Duchane, Alexander W.; Hodson, Douglas D.; Mailloux, Logan O.

    2017-10-01

    Quantum Key Distribution uses optical pulses to generate shared random bit strings between two locations. If a high percentage of the optical pulses are comprised of single photons, then the statistical nature of light and information theory can be used to generate secure shared random bit strings which can then be converted to keys for encryption systems. When these keys are incorporated along with symmetric encryption techniques such as a one-time pad, then this method of key generation and encryption is resistant to future advances in quantum computing which will significantly degrade the effectiveness of current asymmetric key sharing techniques. This research first reviews the transition of Quantum Key Distribution free-space experiments from the laboratory environment to field experiments, and finally, ongoing space experiments. Next, a propagation model for an optical pulse from low-earth orbit to ground and the effects of turbulence on the transmitted optical pulse is described. An Adaptive Optics system is modeled to correct for the aberrations caused by the atmosphere. The long-term point spread function of the completed low-earth orbit to ground optical system is explored in the results section. Finally, the impact of this optical system and its point spread function on an overall quantum key distribution system as well as the future work necessary to show this impact is described.

  6. Distribution of diameters for Erdős-Rényi random graphs.

    PubMed

    Hartmann, A K; Mézard, M

    2018-03-01

    We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.
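
    For orientation, a direct-sampling sketch in Python with networkx (our own toy sizes; unlike the paper's large-deviation sampling, plain sampling only resolves probabilities down to roughly 1/trials). The diameter is taken over the largest connected component, one common convention when the graph is disconnected.

      import numpy as np
      import networkx as nx

      N, c, trials = 200, 2.0, 2000
      diameters = []
      for seed in range(trials):
          g = nx.gnp_random_graph(N, c / N, seed=seed)     # G(N, p) with p = c/N
          giant = g.subgraph(max(nx.connected_components(g), key=len))
          diameters.append(nx.diameter(giant))
      d, counts = np.unique(diameters, return_counts=True)
      P = counts / trials                                  # empirical P(d), bulk only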

  7. Distribution of diameters for Erdős-Rényi random graphs

    NASA Astrophysics Data System (ADS)

    Hartmann, A. K.; Mézard, M.

    2018-03-01

    We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.

  8. Quark model and strange baryon production in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Bialas, A.

    1998-12-01

    It is pointed out that the recent data on strange baryon and antibaryon production in Pb-Pb collisions at 159 GeV/c agree well with the hypothesis of an intermediate state of quasi-free and randomly distributed constituent quarks and antiquarks. Also the S-S data are consistent with this hypothesis. The p-Pb data follow a different pattern.

  9. Evaluation of a multi-point method for determining acoustic impedance

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.; Parrott, Tony L.

    1988-01-01

    An investigation was conducted to explore potential improvements provided by a Multi-Point Method (MPM) over the Standing Wave Method (SWM) and Two-Microphone Method (TMM) for determining acoustic impedance. A wave propagation model was developed to model the standing wave pattern in an impedance tube. The acoustic impedance of a test specimen was calculated from a best fit of this standing wave pattern to pressure measurements obtained along the impedance tube centerline. Three measurement spacing distributions were examined: uniform, random, and selective. Calculated standing wave patterns match the point pressure measurement distributions with good agreement for a reflection factor magnitude range of 0.004 to 0.999. Comparisons of results using 2, 3, 6, and 18 measurement points showed that the most consistent results are obtained when using at least 6 evenly spaced pressure measurements per half-wavelength. Also, data were acquired with broadband noise added to the discrete frequency noise and impedances were calculated using the MPM and TMM algorithms. The results indicate that the MPM will be superior to the TMM in the presence of significant broadband noise levels associated with mean flow.
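
    A sketch of the fitting idea in Python (our own reading, with hypothetical numbers and a sign convention chosen purely for illustration): complex pressures at several axial positions are fit to a two-wave model p(x) = A e^{-ikx} + B e^{ikx} by linear least squares, and the reflection factor and normalised impedance follow from the fitted amplitudes.

      import numpy as np

      rng = np.random.default_rng(7)
      k = 2.0 * np.pi * 1000.0 / 343.0               # wavenumber at 1 kHz in air
      R_true = 0.6 * np.exp(1j * 0.8)                # hypothetical reflection factor

      xs = np.linspace(0.05, 0.20, 6)                # 6 probe positions (m)
      p = np.exp(-1j * k * xs) + R_true * np.exp(1j * k * xs)
      p = p + 0.01 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))  # noise

      # least-squares fit of p(x) = A e^{-ikx} + B e^{+ikx}; then R = B/A at x = 0
      H = np.column_stack([np.exp(-1j * k * xs), np.exp(1j * k * xs)])
      A_hat, B_hat = np.linalg.lstsq(H, p, rcond=None)[0]
      R_hat = B_hat / A_hat
      Z_hat = (1 + R_hat) / (1 - R_hat)              # normalised surface impedance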

  10. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of uncertainties of the temperature measurement by standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing the multivariate Gaussian distribution for input quantities. This allows taking into account the correlations among resistances at the defining fixed points. Assumption of Gaussian probability density function is acceptable, with respect to the several sources of uncertainties of resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate suitability of the method by validation of its results.
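
    The core of the propagation step in Python, as a sketch (all numbers below, including the fixed-point resistances, uncertainties, correlations, and the linearised conversion to temperature, are hypothetical placeholders, not values from the paper):

      import numpy as np

      rng = np.random.default_rng(8)
      M = 200_000                                    # Monte Carlo trials

      mean = np.array([25.5601, 35.7842, 65.1010])   # resistances (ohm), 3 fixed points
      u = np.array([1.0e-4, 2.0e-4, 3.0e-4])         # standard uncertainties (ohm)
      corr = np.array([[1.0, 0.5, 0.3],
                       [0.5, 1.0, 0.4],
                       [0.3, 0.4, 1.0]])             # correlations between fixed points
      cov = corr * np.outer(u, u)

      R = rng.multivariate_normal(mean, cov, size=M) # correlated Gaussian inputs
      W = R[:, 1:] / R[:, :1]                        # ratios W = R(T) / R(first point)
      t = (W - 1.0) / 3.9e-3                         # schematic linearised scale (degC)
      print(t.mean(axis=0), t.std(axis=0))           # MC estimates and uncertainties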

  11. Stochastic Fermi Energization of Coronal Plasma during Explosive Magnetic Energy Release

    NASA Astrophysics Data System (ADS)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz; Tsiolis, Vassilis; Anastasiadis, Anastasios

    2017-02-01

    The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points are acting as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker-Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker & Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λsc) of the particles between the scatterers inside the energization volume.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz

    The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points are acting as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker-Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker and Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λ_sc) of the particles between the scatterers inside the energization volume.

  13. Random Walks on Cartesian Products of Certain Nonamenable Groups and Integer Lattices

    NASA Astrophysics Data System (ADS)

    Vishnepolsky, Rachel

    A random walk on a discrete group satisfies a local limit theorem with power law exponent α if the return probabilities follow the asymptotic law P{return to starting point after n steps} ~ C ρ^n n^(-α). A group has a universal local limit theorem if all random walks on the group with finitely supported step distributions obey a local limit theorem with the same power law exponent. Given two groups that obey universal local limit theorems, it is not known whether their cartesian product also has a universal local limit theorem. We settle the question affirmatively in one case, by considering a random walk on the cartesian product of a nonamenable group whose Cayley graph is a tree, and the integer lattice. As corollaries, we derive large deviations estimates and a central limit theorem.
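
    To see a local limit theorem numerically in the simplest setting (our own example, not the thesis's: the simple walk on Z is amenable, so ρ = 1 and the exponent is α = 1/2), a Python sketch:

      import numpy as np

      rng = np.random.default_rng(9)
      walks, steps = 200_000, 200
      pos = np.cumsum(rng.choice([-1, 1], size=(walks, steps)), axis=1)
      n = np.arange(2, steps + 1, 2)             # returns only possible at even n
      p_return = (pos[:, 1::2] == 0).mean(axis=0)
      alpha = -np.polyfit(np.log(n), np.log(p_return), 1)[0]
      print(alpha)                               # ~ 0.5, the power-law exponent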

  14. How we became the Schmams: conceptualizations of fairness in the decision-making process for Latina/o children.

    PubMed

    Langhout, Regina Day; Kohfeldt, Danielle M; Ellison, Erin Rose

    2011-12-01

    The current study examines 16 Latina/o fifth grade children's desires for a decision-making structure within a youth participatory action research (yPAR) program. When given the choices of consensus, majority rule, authoritarian rule, delegation, and random choice models, children chose random choice. Procedural, distributive and emotional justice were heavily weighted in their reasoning around fairness and decision making. Many thought random choice offered the best alternative because it flattened power hierarchies so that each child would, at some point, have the power to make a decision. Additionally, children argued that the neutrality of random choice allowed them to sidestep interpersonal tensions. Implications include how social identities inform definitions of fairness and how yPAR programs should work with youth around how they will make decisions.

  15. Time-dependent real space RG on the spin-1/2 XXZ chain

    NASA Astrophysics Data System (ADS)

    Mason, Peter; Zagoskin, Alexandre; Betouras, Joseph

    In order to measure the spread of information in a system of interacting fermions with nearest-neighbour couplings and strong bond disorder, one could utilise a dynamical real space renormalisation group (RG) approach on the spin-1/2 XXZ chain. Under such a procedure, a many-body localised state is established as an infinite-randomness fixed point and the entropy scales with time as log(log(t)). One interesting further question raised by such a study is the case where the Hamiltonian explicitly depends on time. Here we answer this question by considering a dynamical renormalisation group treatment of the strongly disordered random spin-1/2 XXZ chain, where the couplings are time-dependent and chosen to reflect a (slow) evolution of the governing Hamiltonian. Under the condition that the renormalisation process occurs at fixed time, a set of coupled second-order, nonlinear PDEs can be written down in terms of the random distributions of the bonds and fields. Solution of these flow equations at the relevant critical fixed points leads us to establish the dynamics of the flow as we sweep through the quantum critical point of the Hamiltonian. We present these critical flows and discuss the issues of duality, entropy and many-body localisation.

  16. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  17. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2017-12-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  18. Characterization of intermittency in renewal processes: Application to earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji

    2010-03-15

    We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework to understand a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalog. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables but that the conditional probability distribution functions in the tail obey the Weibull distribution.

  19. National Emphysema Treatment Trial redux: accentuating the positive.

    PubMed

    Sanchez, Pablo Gerardo; Kucharczuk, John Charles; Su, Stacey; Kaiser, Larry Robert; Cooper, Joel David

    2010-09-01

    Under the Freedom of Information Act, we obtained the follow-up data of the National Emphysema Treatment Trial (NETT) to determine the long-term outcome for "a heterogeneous distribution of emphysema with upper lobe predominance," postulated by the NETT hypothesis to be optimal candidates for lung volume reduction surgery. Using the NETT database, we identified patients with heterogeneous distribution of emphysema with upper lobe predominance and analyzed for the first time follow-up data for those receiving lung volume reduction surgery and those receiving medical management. Furthermore, we compared the results of the NETT reduction surgery group with a previously reported consecutive case series of 250 patients undergoing bilateral lung volume reduction surgery using similar selection criteria. Of the 1218 patients enrolled, 511 (42%) conformed to the NETT hypothesis selection criteria and received the randomly assigned surgical or medical treatment (surgical = 261; medical = 250). Lung volume reduction surgery resulted in a 5-year survival benefit (70% vs 60%; P = .02). Results at 3 years compared with baseline data favored surgical reduction in terms of residual volume reduction (25% vs 2%; P < .001), University of California San Diego dyspnea score (16 vs 0 points; P < .001), and improved St George Respiratory Questionnaire quality of life score (12 points vs 0 points; P < .001). For the 513 patients with a homogeneous pattern of emphysema randomized to surgical or medical treatment, lung volume reduction surgery produced no survival advantage and very limited functional benefit. Patients most likely to benefit from lung volume reduction surgery have heterogeneously distributed emphysema involving the upper lung zones predominantly. Such patients in the NETT trial had results nearly identical to those previously reported in a nonrandomized series of similar patients undergoing lung volume reduction surgery. 2010 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  20. Consumers don’t play dice, influence of social networks and advertisements

    NASA Astrophysics Data System (ADS)

    Groot, Robert D.

    2006-05-01

    Empirical data of supermarket sales show stylised facts that are similar to stock markets, with a broad (truncated) Lévy distribution of weekly sales differences in the baseline sales [R.D. Groot, Physica A 353 (2005) 501]. To investigate the cause of this, the influence of social interactions and advertisements are studied in an agent-based model of consumers in a social network. The influence of network topology was varied by using a small-world network, a random network and a Barabási-Albert network. The degree to which consumers value the opinion of their peers was also varied. On a small-world and random network we find a phase transition between an open market and a locked-in market that is similar to condensation in liquids. At the critical point, fluctuations become large and buying behaviour is strongly correlated. However, on the small world network the noise distribution at the critical point is Gaussian, and critical slowing down occurs which is not observed in supermarket sales. On a scale-free network, the model shows a transition between a gas-like phase and a glassy state, but at the transition point the noise amplitude is much larger than what is seen in supermarket sales. To explore the role of advertisements, a model is studied where imprints are placed on the minds of consumers that ripen when a decision for a product is made. The correct distribution of weekly sales returns follows naturally from this model, as well as the noise amplitude, the correlation time and cross-correlation of sales fluctuations. For particular parameter values, simulated sales correlation shows power-law decay in time. The model predicts that social interaction helps to prevent aversion, and that products are viewed more positively when their consumption rate is higher.

  1. Topographical effects on the distributions of rainfall and 18O distributions: a case in Miyake Island, Japan

    NASA Astrophysics Data System (ADS)

    Tang, Changyuan; Shindo, Shizuo; Machida, Isao

    1998-03-01

    In this paper, we try to calculate precipitation in Miyake Island, Japan. In order to know the temporal and spatial variations of precipitation, we set 15 rain gauges randomly across the island and have collected monthly precipitation data since June 1994. It is found that the precipitation is very different from point to point. First, we used statistical methods to get the correlations between the monthly precipitation at our survey points and that at the weather station. Next, regression analyses were used to establish formulae to calculate precipitation as a function of altitude, aspect of the geomorphological surface and wind direction. Based on these results, distributions of monthly and yearly precipitation and 18O over the island were assessed. The results show that landscape patterns strongly influence the precipitation distribution over the island, with the highest precipitation being found on the windward side, about 400-600 m above sea level. Even at places at the same altitude, the precipitation was different because of the aspect of the landscape. At the same time, the altitude effects for 18O on the windward and leeward sides were -0.10/100 m and -0.15/100 m, respectively. It was also found that the windward-leeward contrast for 18O differed from that for precipitation, which means that the two topographical effects must be considered separately.

  2. Human performance on the traveling salesman problem.

    PubMed

    MacGregor, J N; Ormerod, T

    1996-05-01

    Two experiments on performance on the traveling salesman problem (TSP) are reported. The TSP consists of finding the shortest path through a set of points, returning to the origin. It appears to be an intransigent mathematical problem, and heuristics have been developed to find approximate solutions. The first experiment used 10-point, the second, 20-point problems. The experiments tested the hypothesis that complexity of TSPs is a function of number of nonboundary points, not total number of points. Both experiments supported the hypothesis. The experiments provided information on the quality of subjects' solutions. Their solutions clustered close to the best known solutions, were an order of magnitude better than solutions produced by three well-known heuristics, and on average fell beyond the 99.9th percentile in the distribution of random solutions. The solution process appeared to be perceptually based.
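
    For comparison with the human data, a Python sketch of the kind of baseline used here (the specific heuristic is our own choice; the paper compares against three well-known heuristics): rank a nearest-neighbour tour within the distribution of random tours.

      import numpy as np

      rng = np.random.default_rng(10)
      pts = rng.random((10, 2))                      # a random 10-point problem

      def tour_length(order, pts):
          p = pts[list(order) + [order[0]]]          # close the tour
          return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

      def nearest_neighbour_tour(pts):
          left, tour = set(range(1, len(pts))), [0]
          while left:
              last = pts[tour[-1]]
              nxt = min(left, key=lambda i: np.linalg.norm(pts[i] - last))
              tour.append(nxt)
              left.remove(nxt)
          return tour

      nn_len = tour_length(nearest_neighbour_tour(pts), pts)
      rand = np.array([tour_length(rng.permutation(len(pts)), pts)
                       for _ in range(10_000)])
      print((rand < nn_len).mean() * 100)            # percentile of the NN tour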

  3. Chromatin Folding, Fragile Sites, and Chromosome Aberrations Induced by Low- and High- LET Radiation

    NASA Technical Reports Server (NTRS)

    Zhang, Ye; Cox, Bradley; Asaithamby, Aroumougame; Chen, David J.; Wu, Honglu

    2013-01-01

    We previously demonstrated non-random distributions of breaks involved in chromosome aberrations induced by low- and high-LET radiation. To investigate the factors contributing to the break-point distribution in radiation-induced chromosome aberrations, human epithelial cells were fixed in G1 phase. Interphase chromosomes were hybridized with a multicolor banding in situ hybridization (mBAND) probe for chromosome 3, which distinguishes six regions of the chromosome in separate colors. After the images were captured with a laser scanning confocal microscope, the 3-dimensional structure of interphase chromosome 3 was reconstructed at the multi-megabase-pair scale. Specific locations of the chromosome, in interphase, were also analyzed with bacterial artificial chromosome (BAC) probes. Both mBAND and BAC studies revealed non-random folding of chromatin in interphase, and suggested an association of interphase chromatin folding with the radiation-induced chromosome aberration hotspots. We further investigated the distribution of genes, as well as the distribution of breaks found in tumor cells. Comparisons of these distributions to the radiation hotspots showed that some of the radiation hotspots coincide with the frequent breaks found in solid tumors and with the fragile sites for other environmental toxins. Our results suggest that multiple factors, including the chromatin structure and the gene distribution, can contribute to radiation-induced chromosome aberrations.

  4. A Participatory Randomized Controlled Trial in Knowledge Translation (KT) to Promote the Adoption of Self-Monitoring of Blood Glucose for Type 2 Diabetes Mellitus Patients in An Urban District of Thailand.

    PubMed

    Suriyawongpaisal, Paibul; Tansirisithikul, Rassamee; Sakulpipat, Thida; Charoensuk, Phikul; Aekplakorn, Wichai

    2016-02-01

    To examine the effectiveness of self-monitoring of blood glucose (SMBG) in glycemic control for patients with poorly controlled diabetes, and to test whether the glycemic outcome for those with 7-point SMBG was better than for those with 5-point SMBG or usual care. Randomized controlled trial (RCT) of patients with type 2 diabetes mellitus aged 30 years or older with HbA1c > 7%. Patients were randomly allocated to one of three groups: 7-point SMBG, 5-point SMBG, and a control group. Differences in HbA1c between baseline and 6 months were compared among groups. A total of 191 patients with poorly controlled diabetes were included. Compared with baseline, at 6 months the average changes in HbA1c in the control, 7-point, and 5-point SMBG groups were -0.38, -0.87, and -0.99 (p = 0.04), respectively. The corresponding percentages of patients with reduced HbA1c were 57.1%, 77.6% and 75.5%, respectively (p = 0.03). Using different cut-off values for HbA1c (< 7% and < 7.5%) resulted in different percentage distributions of T2DM patients among the 3 groups, yet the differences were not statistically significant. Reductions in body weight were observed in both SMBG groups but not in the control group. Using an RCT on a participatory basis, SMBG with individual dietary counseling was effective in the short term. Further engagement with the provider team, the patients/caretakers and the health care financing agency to integrate SMBG into the care protocol for poorly controlled diabetes should be considered.

  5. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build a shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response of the hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity of facies.
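
    The level-set idea at the core of the parameterization can be sketched in a few lines (a toy stand-in: the smoothed-noise latent field below is not the indicator geostatistics of the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 64

        # continuous latent field: white noise smoothed with a 9x9 moving average,
        # standing in for a geostatistically simulated Gaussian field
        z = rng.normal(size=(n + 8, n + 8))
        smooth = np.array([[z[i:i + 9, j:j + 9].mean() for j in range(n)]
                           for i in range(n)])

        # level-set transformation: thresholding the continuous variable yields
        # the discrete facies indicator; assimilation can update the continuous
        # field, and the indicator follows
        facies = (smooth > 0.0).astype(int)
        print("fraction of facies 1:", facies.mean())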

  6. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory uncertainty (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.

  7. Hyperuniformity Length in Experimental Foam and Simulated Point Patterns

    NASA Astrophysics Data System (ADS)

    Chieco, Anthony; Roth, Adam; Dreyfus, Remi; Torquato, Salvatore; Durian, Douglas

    2015-03-01

    Systems without long-wavelength number density fluctuations are called hyperuniform (HU). The degree to which a point pattern is HU may be tested in terms of the variance in the number of points inside randomly placed boxes of side length L. If the pattern is HU, then the variance is due solely to fluctuations near the boundary rather than throughout the entire volume of the box. To make this concrete we introduce a hyperuniformity length h, equal to the width of the boundary region where number fluctuations occur. Thus h helps characterize the disorder. We show how to deduce h from the number variance, and we do so for Poisson and Einstein patterns plus those made by the vertices and bubble centroids in 2d foams. A Poisson pattern is one where points are totally random; these are not HU, and h equals L/2. We coin "Einstein patterns" for patterns where the points of a lattice are independently displaced from their sites by a normally distributed amount; these are HU, and h equals the RMS displacement from the lattice sites. Bubble centroids and vertices are both HU. For these, h is less than L/2 and increases slower than linearly in L. The centroids are more HU than the vertices, in that their h increases more slowly.
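
    A sketch of the number-variance test for the two synthetic patterns described above (hypothetical sizes and displacement amplitude; not the authors' analysis code):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 64                                      # points live in an n x n box

        # Poisson pattern: n*n totally random points (not hyperuniform)
        poisson = rng.random((n * n, 2)) * n
        # "Einstein pattern": unit lattice plus independent Gaussian displacements
        gx, gy = np.meshgrid(np.arange(n), np.arange(n))
        lattice = np.c_[gx.ravel(), gy.ravel()] + 0.5
        einstein = lattice + rng.normal(0.0, 0.25, lattice.shape)

        def number_variance(pts, L, trials=1000):
            counts = np.empty(trials)
            for t in range(trials):
                c = rng.random(2) * (n - L)         # randomly placed box
                counts[t] = np.all((pts >= c) & (pts < c + L), axis=1).sum()
            return counts.var()

        for L in (4.0, 8.0, 16.0):
            print(f"L={L}: var(Poisson)={number_variance(poisson, L):7.1f}"
                  f"  var(Einstein)={number_variance(einstein, L):6.2f}")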

  8. Distributed memory approaches for robotic neural controllers

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of Adaptive Vector Quantizers or Self-Organizing Maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.
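
    The RBSDM idea of approximating a mapping as a sum of multivariate Gaussians centered on stored patterns can be sketched as a normalized radial-basis interpolator (a generic stand-in with a hypothetical toy target function, not the paper's network):

        import numpy as np

        rng = np.random.default_rng(0)

        def target(x):                  # stand-in for the camera-to-joint mapping
            return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

        train = rng.random((200, 2))    # previously learned input patterns
        responses = target(train)
        SIGMA = 0.15                    # hypothetical Gaussian width

        def predict(query):
            # output = normalized sum of Gaussians centered on stored patterns
            d2 = ((query[:, None, :] - train[None, :, :]) ** 2).sum(axis=-1)
            w = np.exp(-d2 / (2 * SIGMA ** 2))
            return (w @ responses) / w.sum(axis=1)

        test = rng.random((500, 2))
        print("mean abs error:", np.abs(predict(test) - target(test)).mean())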

  9. Congruent biogeographical disjunctions at a continent-wide scale: Quantifying and clarifying the role of biogeographic barriers in the Australian tropics

    PubMed Central

    Crisp, Michael D.; Cook, Dianne H.; Cook, Lyn G.

    2017-01-01

    Aim To test whether novel and previously hypothesized biogeographic barriers in the Australian Tropics represent significant disjunction points or hard barriers, or both, to the distribution of plants. Location Australian tropics: Australian Monsoon Tropics and Australian Wet Tropics. Methods The presence or absence of 6,861 plant species was scored across 13 putative biogeographic barriers in the Australian Tropics, including two that have not previously been recognised. Randomizations of these data were used to test whether more species showed disjunctions (gaps in distribution) or likely barriers (range limits) at these points than expected by chance. Results Two novel disjunctions in the Australian Tropics flora are identified in addition to eleven putative barriers previously recognized for animals. Of these, eleven disjunction points (all within the Australian Monsoon Tropics) were found to correspond to range-ending barriers to a significant number of species, while neither of the two disjunctions found within the Australian Wet Tropics limited a significant number of species’ ranges. Main conclusions Biogeographic barriers present significant distributional limits to native plant species in the Australian Monsoon Tropics but not in the Australian Wet Tropics. PMID:28376094
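
    The randomization logic can be sketched on a toy one-dimensional transect (all sizes hypothetical; the study's actual null model may differ), scoring how many species ranges end at a putative barrier and comparing against independent circular shifts of each species' range:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sites, n_species, barrier = 100, 400, 50   # hypothetical 1-D transect

        # toy presence/absence matrix: random range intervals, with a fraction of
        # ranges deliberately truncated at the putative barrier
        occ = np.zeros((n_species, n_sites), dtype=bool)
        for s in range(n_species):
            a = int(rng.integers(0, n_sites - 10))
            b = min(n_sites, a + int(rng.integers(5, 40)))
            if rng.random() < 0.15:                  # truly barrier-limited ranges
                a, b = int(rng.integers(0, barrier - 5)), barrier
            occ[s, a:b] = True

        def edges_at(mat, site):                     # ranges ending exactly at `site`
            return int((mat[:, site - 1] & ~mat[:, site]).sum())

        def null_shift(mat):                         # shift each species independently
            return np.array([np.roll(row, rng.integers(n_sites)) for row in mat])

        observed = edges_at(occ, barrier)
        null = np.array([edges_at(null_shift(occ), barrier) for _ in range(1000)])
        p = (np.sum(null >= observed) + 1) / (len(null) + 1)
        print(f"{observed} species ranges end at the barrier; permutation p = {p:.3f}")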

  10. Fast Algorithms for Estimating Mixture Parameters

    DTIC Science & Technology

    1989-08-30

    The investigation is a two-year project, with the first year sponsored by the Army Research Office and the second year by the National Science Foundation (Grant...). Numerical testing of the accelerated fixed-point method was completed. The work on relaxation methods will be done under the sponsorship of the National Science Foundation during the coming year. Keywords: Fast algorithms; Mixture distribution; Random variables. (KR)

  11. North American vegetation model for land-use planning in a changing climate: A solution to large classification problems

    Treesearch

    Gerald E. Rehfeldt; Nicholas L. Crookston; Cuauhtemoc Saenz-Romero; Elizabeth M. Campbell

    2012-01-01

    Data points intensively sampling 46 North American biomes were used to predict the geographic distribution of biomes from climate variables using the Random Forests classification tree. Techniques were incorporated to accommodate a large number of classes and to predict the future occurrence of climates beyond the contemporary climatic range of the biomes. Errors of...

  12. On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model

    NASA Astrophysics Data System (ADS)

    Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin

    2018-01-01

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
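
    The off-critical connection to the coupon collector's problem can be checked directly: the collection time T for n coupons, standardized as (T - n ln n)/n, is known to converge to a Gumbel law. A minimal simulation:

        import numpy as np

        rng = np.random.default_rng(0)
        n, trials = 500, 2000           # coupons (states), repetitions

        def collect(n):
            seen = np.zeros(n, dtype=bool)
            distinct, t = 0, 0
            while distinct < n:         # draw coupons until all n are seen
                i = rng.integers(n)
                if not seen[i]:
                    seen[i] = True
                    distinct += 1
                t += 1
            return t

        T = np.array([collect(n) for _ in range(trials)])
        x = (T - n * np.log(n)) / n     # standardized collection time
        for q in (0.25, 0.50, 0.75):
            # Gumbel CDF is F(x) = exp(-exp(-x)), so its q-quantile is -ln(-ln q)
            print(f"q={q}: empirical {np.quantile(x, q):+.3f}"
                  f"  Gumbel {-np.log(-np.log(q)):+.3f}")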

  13. Local approximation of a metapopulation's equilibrium.

    PubMed

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to the equilibrium occupation probability in Levins's model, at any point z not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
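
    For reference, Levins's model is dp/dt = c p (1 - p) - e p, with equilibrium occupancy p* = 1 - e/c when c > e; a few lines confirm the fixed point numerically (rates are hypothetical):

        # Levins's metapopulation model: dp/dt = c p (1 - p) - e p,
        # with equilibrium occupancy p* = 1 - e/c (for c > e)
        c, e = 0.5, 0.2                 # colonization and extinction rates (hypothetical)
        p, dt = 0.05, 0.01
        for _ in range(20000):          # simple forward-Euler integration
            p += dt * (c * p * (1.0 - p) - e * p)
        print(f"simulated equilibrium {p:.4f} vs analytic 1 - e/c = {1 - e / c:.4f}")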

  14. Stochastic transport in the presence of spatial disorder: Fluctuation-induced corrections to homogenization

    NASA Astrophysics Data System (ADS)

    Russell, Matthew J.; Jensen, Oliver E.; Galla, Tobias

    2016-10-01

    Motivated by uncertainty quantification in natural transport systems, we investigate an individual-based transport process involving particles undergoing a random walk along a line of point sinks whose strengths are themselves independent random variables. We assume particles are removed from the system via first-order kinetics. We analyze the system using a hierarchy of approaches when the sinks are sparsely distributed, including a stochastic homogenization approximation that yields explicit predictions for the extrinsic disorder in the stationary state due to sink strength fluctuations. The extrinsic noise induces long-range spatial correlations in the particle concentration, unlike fluctuations due to the intrinsic noise alone. Additionally, the mean concentration profile, averaged over both intrinsic and extrinsic noise, is elevated compared with the corresponding profile from a uniform sink distribution, showing that the classical homogenization approximation can be a biased estimator of the true mean.

  15. Binaural Simulation Experiments in the NASA Langley Structural Acoustics Loads and Transmission Facility

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Silcox, Richard (Technical Monitor)

    2001-01-01

    A location and positioning system was developed and implemented in the anechoic chamber of the Structural Acoustics Loads and Transmission (SALT) facility to accurately determine the coordinates of points in three-dimensional space. Transfer functions were measured between a shaker source at two different panel locations and the vibrational response distributed over the panel surface using a scanning laser vibrometer. The binaural simulation test matrix included test runs for several locations of the measuring microphones, various attitudes of the mannequin, two locations of the shaker excitation and three different shaker inputs including pulse, broadband random, and pseudo-random. Transfer functions, auto spectra, and coherence functions were acquired for the pseudo-random excitation. Time histories were acquired for the pulse and broadband random input to the shaker. The tests were repeated with a reflective surface installed. Binary data files were converted to universal format and archived on compact disk.

  16. Impact Assessment of Mikania Micrantha on Land Cover and Maxent Modeling to Predict its Potential Invasion Sites

    NASA Astrophysics Data System (ADS)

    Baidar, T.; Shrestha, A. B.; Ranjit, R.; Adhikari, R.; Ghimire, S.; Shrestha, N.

    2017-05-01

    Mikania micrantha is one of the major invasive alien plant species in tropical moist forest regions of Asia, including Nepal. Recently, this weed has been spreading at an alarming rate in Chitwan National Park (CNP) and threatening biodiversity. This paper aims to assess the impacts of Mikania micrantha on different land cover and to predict potential invasion sites in CNP using the Maxent model. Primary data were presence-point coordinates and perceived Mikania micrantha cover collected through a systematic random sampling technique. A RapidEye image, Shuttle Radar Topography Mission data and bioclimatic variables were acquired as secondary data. Mikania micrantha distribution maps were prepared by overlaying the presence points on an image classified by object-based image analysis. The overall accuracy of classification was 90% with a Kappa coefficient of 0.848. A table giving the number of sample points in each land cover with the respective Mikania micrantha coverage was extracted from the distribution maps to show the impact. The riverine forest was found to be the most affected land cover, with 85.98% of presence points, and sal forest was much less affected, with only 17.02% of presence points. Maxent modeling predicted the areas near the river valley as the potential invasion sites, with a statistically significant Area Under the Receiver Operating Characteristic curve (AUC) value of 0.969. Maximum temperature of the warmest month and annual precipitation were identified as the predictor variables that contribute the most to Mikania micrantha's potential distribution.

  17. Collisional evolution of rotating, non-identical particles. [in Saturn rings

    NASA Technical Reports Server (NTRS)

    Salo, H.

    1987-01-01

    Hameen-Anttila's (1984) theory of self-gravitating collisional particle disks is extended to include the effects of particle spin. Equations are derived for the coupled evolution of random velocities and spins, showing that friction and surface irregularity both reduce the local velocity dispersion and transfer significant amounts of random kinetic energy to rotational energy. Results for the equilibrium ratio of rotational energy to random kinetic energy are exact not only for identical nongravitating mass points, but also if finite size, self-gravitating forces, or size distribution are included. The model is applied to the dynamics of Saturn's rings, showing that the inclusion of rotation reduces the geometrical thickness of the layer of cm-sized particles to, at most, about one-half, with large particles being less affected.

  18. Misinterpretation of statistical distance in security of quantum key distribution shown by simulation

    NASA Astrophysics Data System (ADS)

    Iwakoshi, Takehisa; Hirota, Osamu

    2014-10-01

    This study tests an interpretation in quantum key distribution (QKD) according to which the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both key uniformity in the context of universal composability and an operational meaning of the failure probability of the key extraction. However, this proposal had not been verified concretely for many years, and H. P. Yuen and O. Hirota have cast doubt on this interpretation since 2009. To examine this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to the trace distance in quantum theory after a quantum measurement is made, and compared it with the failure probability to check whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why trace distance is not suitable to guarantee the security of QKD from the viewpoint of quantum binary decision theory.
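
    The classical statistical distance in question is the total variation distance between the empirical output distribution and the uniform one; a sketch of the comparison (sizes hypothetical, and numpy's generator standing in for the physical device):

        import numpy as np

        rng = np.random.default_rng(0)
        bits, samples = 8, 100000       # 8-bit keys; sizes are hypothetical
        keys = rng.integers(0, 2 ** bits, samples)   # stand-in for the RNG under test

        counts = np.bincount(keys, minlength=2 ** bits)
        empirical = counts / samples
        ideal = np.full(2 ** bits, 2.0 ** -bits)
        tv = 0.5 * np.abs(empirical - ideal).sum()
        # finite sampling alone keeps this bounded away from zero, roughly
        # of order sqrt(2**bits / samples)
        print("statistical (total variation) distance:", tv)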

  19. Football fever: goal distributions and non-Gaussian statistics

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2009-02-01

    Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, the tails of the distributions, especially, are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopic point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows us to understand the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, here we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during the cold war times as well as the unified league after 1990 to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
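
    The self-affirmation component can be sketched as a modified Bernoulli process in which every goal multiplies the scoring probability for the remaining minutes (parameter values hypothetical); the variance-to-mean ratio rises above the binomial case, fattening the tail:

        import numpy as np

        rng = np.random.default_rng(0)
        minutes, p0, boost, games = 90, 0.015, 0.3, 20000   # hypothetical values

        def goals(self_affirming):
            p, g = p0, 0
            for _ in range(minutes):          # one Bernoulli trial per minute
                if rng.random() < p:
                    g += 1
                    if self_affirming:
                        p *= 1.0 + boost      # each goal raises later scoring odds
            return g

        plain = np.array([goals(False) for _ in range(games)])
        affirmed = np.array([goals(True) for _ in range(games)])
        for name, x in (("plain Bernoulli", plain), ("self-affirming", affirmed)):
            print(f"{name}: mean {x.mean():.2f}, variance {x.var():.2f}")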

  20. The First Order Correction to the Exit Distribution for Some Random Walks

    NASA Astrophysics Data System (ADS)

    Kennedy, Tom

    2016-07-01

    We study three different random walk models on several two-dimensional lattices by Monte Carlo simulations. One is the usual nearest neighbor random walk. Another is the nearest neighbor random walk which is not allowed to backtrack. The final model is the smart kinetic walk. For all three of these models the distribution of the point where the walk exits a simply connected domain D in the plane converges weakly to harmonic measure on ∂D as the lattice spacing δ → 0. Let ω(0,·;D) be harmonic measure for D, and let ω_δ(0,·;D) be the discrete harmonic measure for one of the random walk models. Our definition of the random walk models is unusual in that we average over the orientation of the lattice with respect to the domain. We are interested in the limit of (ω_δ(0,·;D) - ω(0,·;D))/δ. Our Monte Carlo simulations of the three models lead to the conjecture that this limit equals c_{M,L} ρ_D(z) times Lebesgue measure with respect to arc length along the boundary, where the function ρ_D(z) depends on the domain, but not on the model or lattice, and the constant c_{M,L} depends on the model and on the lattice, but not on the domain. So there is a form of universality for this first order correction. We also give an explicit formula for the conjectured density ρ_D.

  1. Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations

    NASA Astrophysics Data System (ADS)

    Kozak, P.

    2014-12-01

    The Monte Carlo method (the method of statistical trials) as a tool for processing meteor observations was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea is that if we generate random values of the input data (the equatorial coordinates of the meteor head in a sequence of TV frames) in accordance with their statistical distributions, we can plot the probability density distributions for all of the meteor's kinematical parameters and obtain their mean values and dispersions. This also opens a theoretical possibility of refining the most important parameter, the geocentric velocity of a meteor, which has the greatest influence on the precision of the computed heliocentric orbit elements. In the classical approach the velocity vector is calculated in two stages: first the direction of the vector is obtained as the cross product of the pole vectors of the meteor-trajectory great circles determined from the two observation points; then the absolute value of the velocity is calculated independently from each observation point, and one of the two values is selected, on some grounds, as final. In the given method we propose instead to obtain the statistical distribution of the velocity's absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect that such an approach should substantially increase the precision of meteor velocity calculation and remove subjective inaccuracies.
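
    The proposed intersection of the two single-station distributions can be sketched with synthetic speed estimates (all numbers hypothetical): multiply the two empirical densities and renormalize, which narrows the combined distribution:

        import numpy as np

        rng = np.random.default_rng(0)
        trials = 50000
        # statistical trials: speed estimates from two stations, each with its
        # own (hypothetical) error model
        v1 = rng.normal(30.0, 0.8, trials)          # station 1, km/s
        v2 = rng.normal(30.5, 1.2, trials)          # station 2, km/s

        bins = np.linspace(25.0, 36.0, 221)
        h1, _ = np.histogram(v1, bins=bins, density=True)
        h2, _ = np.histogram(v2, bins=bins, density=True)

        joint = h1 * h2                             # intersection of the two densities
        width = bins[1] - bins[0]
        joint /= joint.sum() * width                # renormalize to a density
        centers = 0.5 * (bins[:-1] + bins[1:])
        mean = (centers * joint * width).sum()
        sd = np.sqrt(((centers - mean) ** 2 * joint * width).sum())
        print(f"combined speed {mean:.2f} +/- {sd:.2f} km/s")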

  2. Ising Critical Behavior of Inhomogeneous Curie-Weiss Models and Annealed Random Graphs

    NASA Astrophysics Data System (ADS)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa

    2016-11-01

    We study the critical behavior of inhomogeneous versions of the Curie-Weiss model, where the coupling constant J_ij(β) for the edge ij on the complete graph is given by J_ij(β) = β w_i w_j / (Σ_{k∈[N]} w_k). We call the product form of these couplings the rank-1 inhomogeneous Curie-Weiss model. This model also arises (with the inverse temperature β replaced by sinh(β)) from the annealed Ising model on the generalized random graph. We assume that the vertex weights (w_i)_{i∈[N]} are regular, in the sense that their empirical distribution converges and the second moment converges as well. We identify the critical temperatures and exponents for these models, as well as a non-classical limit theorem for the total spin at the critical point. These depend sensitively on the number of finite moments of the weight distribution. When the fourth moment of the weight distribution converges, the critical behavior is the same as for the (homogeneous) Curie-Weiss model, so that the inhomogeneity is weak. When the fourth moment of the weights diverges, and the weights satisfy an asymptotic power law with exponent τ with τ ∈ (3,5), the critical exponents depend sensitively on τ. In addition, at criticality, the total spin S_N satisfies that S_N/N^{(τ-2)/(τ-1)} converges in law to some limiting random variable whose distribution we explicitly characterize.

  3. Data-driven probability concentration and sampling on manifold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2016-09-15

    A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.

  4. Exact Derivation of a Finite-Size Scaling Law and Corrections to Scaling in the Geometric Galton-Watson Process

    PubMed Central

    Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc

    2016-01-01

    The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities at the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for the Galton-Watson branching processes when the number of offspring of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter case, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
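
    A quick numerical illustration of the critical case (geometric offspring on {0, 1, 2, ...} with mean one; the paper's generalized geometric case is not treated here):

        import numpy as np

        rng = np.random.default_rng(0)

        def total_progeny(p, cap=10 ** 5):
            # Galton-Watson process; offspring = geometric(p) on {0, 1, 2, ...},
            # mean (1 - p)/p, so p = 1/2 is the critical point
            alive, total = 1, 1
            while alive and total < cap:
                offspring = rng.geometric(p, alive) - 1  # numpy's geometric starts at 1
                alive = int(offspring.sum())
                total += alive
            return total

        sizes = np.array([total_progeny(0.5) for _ in range(10000)])
        for s in (100, 400, 1600):
            tail = (sizes >= s).mean()
            # at criticality the total-progeny tail decays like s**(-1/2),
            # so sqrt(s) * P(T >= s) should be roughly constant
            print(f"P(T >= {s}) = {tail:.4f},  sqrt(s)*P = {np.sqrt(s) * tail:.2f}")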

  5. Forest fire spatial pattern analysis in Galicia (NW Spain).

    PubMed

    Fuentes-Santos, I; Marey-Pérez, M F; González-Manteiga, W

    2013-10-15

    Knowledge of fire behaviour is of key importance in forest management. In the present study, we analysed the spatial structure of forest fire with spatial point pattern analysis and inference techniques recently developed in the Spatstat package of R. Wildfires have been the primary threat to Galician forests in recent years. The district of Fonsagrada-Ancares is one of the most seriously affected by fire in the region and, therefore, the central focus of the study. Our main goal was to determine the spatial distribution of ignition points to model and predict fire occurrence. These data are of great value in establishing enhanced fire prevention and fire fighting plans. We found that the spatial distribution of wildfires is not random and that fire occurrence may depend on ownership conflicts. We also found positive interaction between small and large fires and spatial independence between wildfires in consecutive years.

  6. Electron Waiting Times in Mesoscopic Conductors

    NASA Astrophysics Data System (ADS)

    Albert, Mathias; Haack, Géraldine; Flindt, Christian; Büttiker, Markus

    2012-05-01

    Electron transport in mesoscopic conductors has traditionally involved investigations of the mean current and the fluctuations of the current. A complementary view on charge transport is provided by the distribution of waiting times between charge carriers, but a proper theoretical framework for coherent electronic systems has so far been lacking. Here we develop a quantum theory of electron waiting times in mesoscopic conductors expressed by a compact determinant formula. We illustrate our methodology by calculating the waiting time distribution for a quantum point contact and find a crossover from Wigner-Dyson statistics at full transmission to Poisson statistics close to pinch-off. Even when the low-frequency transport is noiseless, the electrons are not equally spaced in time due to their inherent wave nature. We discuss the implications for renewal theory in mesoscopic systems and point out several analogies with level spacing statistics and random matrix theory.

  7. Safety assessment of a shallow foundation using the random finite element method

    NASA Astrophysics Data System (ADS)

    Zaskórski, Łukasz; Puła, Wojciech

    2015-04-01

    A complex structure and the random character of soil are the reasons why soil modeling is a cumbersome task. The heterogeneity of soil has to be considered even within a homogeneous layer, so estimating the shear strength parameters of soil for the purposes of a geotechnical analysis causes many problems. The applicable standard (Eurocode 7) does not present any explicit method for evaluating characteristic values of soil parameters; only general guidelines on how these values should be estimated can be found. Hence many approaches to the assessment of characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β, and several approaches to estimating characteristic values of soil properties were compared by evaluating the values of β that each of them achieves. The method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for fluctuation scales, and the method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were referred to the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of the foundation in conjunction with the design approaches (DA) defined in Eurocode. RFEM was presented by Griffiths and Fenton (1993). It combines the deterministic finite element method, random field theory and Monte Carlo simulations. Random field theory allows the random character of soil parameters to be considered within a homogeneous layer of soil; for this purpose a soil property is treated as a separate random variable in every element of the finite element mesh, with a proper correlation structure between points of the given area. RFEM was applied to estimate which theoretical probability distribution fits the empirical probability distribution of the bearing capacity, based on 3000 realizations. The assessed probability distribution was applied to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil; hence the friction angle and the cohesion were defined as random parameters and characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, as it varies within a limited range, while a lognormal distribution was applied for the cohesion. The other properties (Young's modulus, Poisson's ratio and unit weight) were assumed to be deterministic values because they have negligible influence on the stochastic bearing capacity. Griffiths D. V., & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.

  8. Effect diffraction on a viewed object has on improvement of object optical image quality in a turbulent medium

    NASA Astrophysics Data System (ADS)

    Banakh, Viktor A.; Sazanovich, Valentina M.; Tsvik, Ruvim S.

    1997-09-01

    The influence of diffraction on an object, coherently illuminated and viewed through a random medium from the same point, on the improvement of image quality caused by the counter-wave correlation is studied experimentally. The measurements were carried out with a setup modeling artificial convective turbulence. It is shown that, for a spatially limited reflector with the Fresnel number of the reflector surface radius r ranging from 3 to 12, the contribution of the counter-wave correlation to the image intensity distribution is maximal compared with that for point objects.

  9. Computer simulation of the probability that endangered whales will interact with oil spills, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, M.; Jayko, K.; Bowles, A.

    1986-10-01

    A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration and diving-surfacing models, together with an oil-spill-trajectory model, comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The distribution of animals is represented in space and time by discrete points, each of which may represent one or more whales. The movement of a whale point is governed by a random-walk algorithm which stochastically follows a migratory pathway.

  10. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
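
    The one-dimensional building block of such simulations, a plain flat-histogram Wang-Landau estimate of a density of states, can be sketched on a small periodic Ising chain (this is the standard algorithm, not the authors' macroscopically constrained scheme):

        import numpy as np

        rng = np.random.default_rng(0)
        N = 16                                     # small periodic Ising chain
        spins = rng.choice([-1, 1], N)

        def energy(s):
            return int(-np.sum(s * np.roll(s, 1)))

        # attainable energies: -N, -N+4, ..., N (domain walls come in pairs)
        levels = {e: i for i, e in enumerate(range(-N, N + 1, 4))}
        lng = np.zeros(len(levels))                # running estimate of ln g(E)
        hist = np.zeros(len(levels))
        f, E = 1.0, energy(spins)

        while f > 1e-6:
            i = rng.integers(N)                    # propose a single-spin flip
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
            new = E + dE
            if rng.random() < np.exp(lng[levels[E]] - lng[levels[new]]):
                spins[i] *= -1
                E = new
            lng[levels[E]] += f                    # Wang-Landau update
            hist[levels[E]] += 1
            if hist.min() > 0.8 * hist.mean():     # histogram "flat": halve f
                f, hist = f / 2, np.zeros_like(hist)

        lng -= lng[0] - np.log(2.0)                # normalize: 2 ground states
        print("estimated ln g(E):", dict(zip(levels, lng.round(2))))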

  11. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

    The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a sparse wide-band signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
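
    The aliasing-suppression effect of additive random sampling can be sketched with a toy tone (values hypothetical): a 480 Hz signal sampled at an average of 100 Sa/s, far below its Nyquist rate, is still identified from a periodogram evaluated on the nonuniform sample times:

        import numpy as np

        rng = np.random.default_rng(0)
        f_sig, T, mean_rate = 480.0, 2.0, 100.0    # 480 Hz tone, 100 Sa/s average

        # additive random sampling: the next sample time is the previous one plus
        # a positive random increment (here exponential), mimicking the randomly
        # modulated pulse interval
        t = np.cumsum(rng.exponential(1.0 / mean_rate, int(4 * mean_rate * T)))
        t = t[t < T]
        x = np.sin(2 * np.pi * f_sig * t)

        # periodogram on the nonuniform samples; uniform 100 Sa/s sampling would
        # alias 480 Hz down to 20 Hz, random sampling does not
        freqs = np.arange(1.0, 600.0)
        power = [(np.sin(2 * np.pi * f * t) @ x) ** 2 +
                 (np.cos(2 * np.pi * f * t) @ x) ** 2 for f in freqs]
        print("strongest component at", freqs[int(np.argmax(power))], "Hz")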

  12. A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco

    2016-04-01

    We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags), with sharp peaks and heavy tails which tend to decay asymptotically as the lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to the hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, and measured and simulated turbulent fluid velocity, among others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by jointly analyzing spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.

  13. The Degree of Adherence to Educational Values by the Students of the University of Jordan--In Their Point of View

    ERIC Educational Resources Information Center

    Al-Serhan, Khaled Ali

    2016-01-01

    The study aimed to probe the degree of adherence by students of the University of Jordan to moral values. To achieve this, a survey was prepared and distributed to a random sample of 1769 students in the second semester of the 2013-2014 school year. The study showed a high degree of adherence to religious, social and behavioral values, and a…

  14. Neutral Evolution of Duplicated DNA: An Evolutionary Stick-Breaking Process Causes Scale-Invariant Behavior

    NASA Astrophysics Data System (ADS)

    Massip, Florian; Arndt, Peter F.

    2013-04-01

    Recently, an enrichment of identical matching sequences has been found in many eukaryotic genomes. Their length distribution exhibits a power law tail raising the question of what evolutionary mechanism or functional constraints would be able to shape this distribution. Here we introduce a simple and evolutionarily neutral model, which involves only point mutations and segmental duplications, and produces the same statistical features as observed for genomic data. Further, we extend a mathematical model for random stick breaking to analytically show that the exponent of the power law tail is -3 and universal as it does not depend on the microscopic details of the model.

  15. Bounds on the conductivity of a suspension of random impenetrable spheres

    NASA Astrophysics Data System (ADS)

    Beasley, J. D.; Torquato, S.

    1986-11-01

    We compare the general Beran bounds on the effective electrical conductivity of a two-phase composite to the bounds derived by Torquato for the specific model of spheres distributed throughout a matrix phase. For the case of impenetrable spheres, these bounds are shown to be identical and to depend on the microstructure through the sphere volume fraction φ2 and a three-point parameter ζ2, which is an integral over a three-point correlation function. We evaluate ζ2 exactly through third order in φ2 for distributions of impenetrable spheres. This expansion is compared to the analogous results of Felderhof and of Torquato and Lado, all of whom employed the superposition approximation for the three-particle distribution function involved in ζ2. The results indicate that the exact ζ2 will be greater than the value calculated under the superposition approximation. For reasons of mathematical analogy, the results obtained here apply as well to the determination of the thermal conductivity, dielectric constant, and magnetic permeability of composite media and the diffusion coefficient of porous media.

  16. Accuracy of Reaction Cross Section for Exotic Nuclei in Glauber Model Based on MCMC Diagnostics

    NASA Astrophysics Data System (ADS)

    Rueter, Keiti; Novikov, Ivan

    2017-01-01

    Parameters of a nuclear density distribution for an exotic nucleus with a halo or skin structure can be determined from the experimentally measured reaction cross-section. In the present work, to extract parameters such as the nuclear sizes of the halo and the core, we compare experimental data on reaction cross-sections with values obtained using expressions of the Glauber model. These calculations are performed using a Markov Chain Monte Carlo algorithm. We discuss the accuracy of the Monte Carlo approach and its dependence on k*, the power-law turnover point in the discrete power spectrum of the random number sequence, and on the lag-1 autocorrelation time of the random number sequence.
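
    The lag-1 autocorrelation diagnostic mentioned above is straightforward to estimate for any candidate generator (numpy's generator shown as a stand-in):

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.random(100000)          # random number sequence under test
        xc = x - x.mean()
        rho1 = (xc[:-1] * xc[1:]).mean() / xc.var()
        print("lag-1 autocorrelation:", rho1)   # near 0 for a good generator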

  17. A multi-assets artificial stock market with zero-intelligence traders

    NASA Astrophysics Data System (ADS)

    Ponta, L.; Raberto, M.; Cincotti, S.

    2011-01-01

    In this paper, a multi-asset artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and the allocation universe. Within this framework, stock price processes exhibit volatility clustering, a fat-tailed distribution of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework without dividends, purely as a consequence of random restrictions on the allocation universe of the zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results obtained in the simulation without dividends. These results suggest a significant structural influence on the statistical properties of multi-asset stock markets.

  18. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), the distribution of benthic items, and item density affected the bias and precision of density estimates, the detection probability of items, and the time costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and improved slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500-1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes depending on their research goals and environmental conditions.
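
    The simulation logic can be sketched outside a GIS (all sizes hypothetical: a 10 m x 10 m plot, 50 cm2 circular cores); clumping leaves the mean density estimate roughly unbiased but inflates its spread:

        import numpy as np

        rng = np.random.default_rng(0)
        side, density = 10.0, 800.0                 # 10 m x 10 m plot, items per m^2
        n_items = int(side * side * density)

        def make_items(clumped):
            if not clumped:
                return rng.random((n_items, 2)) * side
            centers = rng.random((50, 2)) * side    # 50 clumps, sd 0.2 m
            pick = rng.integers(50, size=n_items)
            return np.clip(centers[pick] + rng.normal(0, 0.2, (n_items, 2)), 0, side)

        def estimate(items, n_cores=30, core_cm2=50.0):
            r = np.sqrt(core_cm2 / 1e4 / np.pi)     # core radius in metres
            c = rng.random((n_cores, 2)) * (side - 2 * r) + r
            d2 = ((items[None, :, :] - c[:, None, :]) ** 2).sum(axis=-1)
            return (d2 < r * r).sum(axis=1).mean() / (np.pi * r * r)

        for clumped in (False, True):
            est = [estimate(make_items(clumped)) for _ in range(50)]
            print("clumped" if clumped else "random ",
                  f"mean {np.mean(est):7.0f}  sd {np.std(est):6.0f}")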

  19. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper the statistical generalization of the Fermi paradox is provided. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations.
A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
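
    A quick numerical check of the CLT argument in point 4 (with arbitrary placeholder means rather than Dole's values): the product of ten independent uniform factors is right-skewed while its logarithm is nearly symmetric, i.e. approximately lognormal:

        import numpy as np

        rng = np.random.default_rng(0)
        trials, n_factors = 100000, 10

        # ten positive factors, each uniform within +/-10% of its mean; the means
        # here are arbitrary placeholders, not Dole's values
        means = rng.uniform(0.1, 10.0, n_factors)
        factors = rng.uniform(means * 0.9, means * 1.1, (trials, n_factors))
        product = factors.prod(axis=1)

        logp = np.log(product)
        skew = lambda v: ((v - v.mean()) ** 3).mean() / v.std() ** 3
        # the product is right-skewed (lognormal-like) while its log is nearly
        # symmetric (Gaussian-like), as the CLT argument predicts
        print(f"skewness: product {skew(product):.3f}, log(product) {skew(logp):.3f}")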

  20. Anisotropy in Fracking: A Percolation Model for Observed Microseismicity

    NASA Astrophysics Data System (ADS)

    Norris, J. Quinn; Turcotte, Donald L.; Rundle, John B.

    2015-01-01

    Hydraulic fracturing (fracking), using high pressures and a low-viscosity fluid, allows the extraction of large quantities of oil and gas from very low permeability shale formations. The initial production of oil and gas at depth leads to high pressures and an extensive distribution of natural fractures which reduce the pressures. With time these fractures heal, sealing the remaining oil and gas in place. High-volume fracking opens the healed fractures, allowing the oil and gas to flow to horizontal production wells. We model the injection process using invasion percolation. We use a 2D square lattice of bonds to model the sealed natural fractures. The bonds are assigned random strengths, and the fluid, injected at a point, opens the weakest bond adjacent to the growing cluster of opened bonds. Our model exhibits burst dynamics, in which the cluster extends rapidly into regions with weak bonds. We associate these bursts with the microseismic activity generated by fracking injections. A principal object of this paper is to study the role of anisotropic stress distributions. Bonds in the y-direction are assigned higher random strengths than bonds in the x-direction. We illustrate the spatial distribution of clusters and the spatial distribution of bursts (small earthquakes) for several degrees of anisotropy. The results are compared with observed distributions of microseismicity in a fracking injection. Both our bursts and the observed microseismicity satisfy Gutenberg-Richter frequency-size statistics.
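
    Invasion percolation with anisotropic bond strengths can be sketched compactly (parameter values hypothetical): always open the weakest boundary bond, with y-bonds drawn from a stronger distribution, and the cluster elongates in x:

        import heapq
        import numpy as np

        rng = np.random.default_rng(0)
        N, ANISOTROPY, SITES = 201, 2.0, 5000       # hypothetical values

        strength = {}
        def bond(a, b):                             # lazily assigned bond strengths
            key = (a, b) if a < b else (b, a)
            if key not in strength:
                vertical = a[0] != b[0]             # y-direction bond
                strength[key] = rng.random() * (ANISOTROPY if vertical else 1.0)
            return strength[key]

        start = (N // 2, N // 2)
        opened, frontier = {start}, []

        def push(site):
            for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (site[0] + d[0], site[1] + d[1])
                if 0 <= nb[0] < N and 0 <= nb[1] < N and nb not in opened:
                    heapq.heappush(frontier, (bond(site, nb), nb))

        push(start)
        for _ in range(SITES):                      # always open the weakest bond
            while True:
                _, site = heapq.heappop(frontier)
                if site not in opened:
                    break
            opened.add(site)
            push(site)

        pts = np.array(sorted(opened))
        print("extent x:", np.ptp(pts[:, 1]), " extent y:", np.ptp(pts[:, 0]))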

  1. Necessary detection efficiencies for secure quantum key distribution and bound randomness

    NASA Astrophysics Data System (ADS)

    Acín, Antonio; Cavalcanti, Daniel; Passaro, Elsa; Pironio, Stefano; Skrzypczyk, Paul

    2016-01-01

    In recent years, several hacking attacks have broken the security of quantum cryptography implementations by exploiting the presence of losses and the ability of the eavesdropper to tune detection efficiencies. We present a simple attack of this form that applies to any protocol in which the key is constructed from the results of untrusted measurements performed on particles coming from an insecure source or channel. Because of its generality, the attack applies to a large class of protocols, from standard prepare-and-measure to device-independent schemes. Our attack gives bounds on the critical detection efficiencies necessary for secure quantum key distribution, which show that the implementation of most partly device-independent solutions is, from the point of view of detection efficiency, almost as demanding as fully device-independent ones. We also show how our attack implies the existence of a form of bound randomness, namely nonlocal correlations in which a nonsignalling eavesdropper can find out a posteriori the result of any implemented measurement.

  2. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to computer-model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important to realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated ion-trajectories/Poisson-electric-fields successive-iteration process. Second, space charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space charge constraints on ion emission from surfaces can be incorporated under Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. And finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
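
    NEDLab itself is not reproduced here; the fragment below only illustrates the last two stochastic ingredients named above, under common assumptions that are mine rather than the program's: a Maxwell-Boltzmann (Gamma(3/2, kT)) emission-energy spectrum and cosine-weighted (Lambertian) emission directions, with a placeholder surface temperature.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    kT_eV = 0.10   # placeholder ionizer temperature scale (~1160 K)
    n = 10000

    # Thermal emission energies: Maxwell-Boltzmann <=> Gamma(shape=3/2, scale=kT)
    energy = rng.gamma(shape=1.5, scale=kT_eV, size=n)

    # Randomized emission angles over the outward hemisphere (cosine-weighted)
    cos_theta = np.sqrt(rng.random(n))
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    directions = np.column_stack((sin_theta * np.cos(phi),
                                  sin_theta * np.sin(phi),
                                  cos_theta))          # unit emission vectors

    print(f"mean energy {energy.mean():.3f} eV (theory 1.5 kT = {1.5 * kT_eV:.3f});"
          f" mean polar cosine {directions[:, 2].mean():.3f} (theory 2/3)")
    ```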

  3. Stochastic modelling of animal movement.

    PubMed

    Smouse, Peter E; Focardi, Stefano; Moorcroft, Paul R; Kie, John G; Forester, James D; Morales, Juan M

    2010-07-27

    Modern animal movement modelling derives from two traditions. Lagrangian models, based on random walk behaviour, are useful for multi-step trajectories of single animals. Continuous Eulerian models describe expected behaviour, averaged over stochastic realizations, and are usefully applied to ensembles of individuals. We illustrate three modern research arenas. (i) Models of home-range formation describe the process of an animal 'settling down', accomplished by including one or more focal points that attract the animal's movements. (ii) Memory-based models are used to predict how accumulated experience translates into biased movement choices, employing reinforced random walk behaviour, with previous visitation increasing or decreasing the probability of repetition. (iii) Lévy movement involves a step-length distribution that is over-dispersed, relative to standard probability distributions, and adaptive in exploring new environments or searching for rare targets. Each of these modelling arenas implies more detail in the movement pattern than general models of movement can accommodate, but realistic empirical evaluation of their predictions requires dense locational data, both in time and space, only available with modern GPS telemetry.
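
    As an illustration of arena (iii), the sketch below generates a two-dimensional Lévy-type track by inverse-transform sampling of a power-law step length; the exponent and minimum step are arbitrary choices, whereas real analyses fit the step-length distribution to telemetry data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def levy_walk(n_steps, mu=2.0, l_min=1.0):
        """2D random walk with power-law (Pareto) step lengths, p(l) ~ l**-mu.

        For 1 < mu <= 3 the step-length distribution is over-dispersed
        relative to, e.g., an exponential: occasional very long relocations
        are interspersed with clusters of short moves.
        """
        u = rng.random(n_steps)
        lengths = l_min * u ** (-1.0 / (mu - 1.0))       # inverse-transform sample
        angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)  # isotropic headings
        steps = np.column_stack((lengths * np.cos(angles),
                                 lengths * np.sin(angles)))
        return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))

    track = levy_walk(1000)
    print("net displacement:", np.hypot(*track[-1]))
    ```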

  4. Population pharmacokinetics of valnemulin in swine.

    PubMed

    Zhao, D H; Zhang, Z; Zhang, C Y; Liu, Z C; Deng, H; Yu, J J; Guo, J P; Liu, Y H

    2014-02-01

    This study was carried out in 121 pigs to develop a population pharmacokinetic (PPK) model by oral (p.o.) administration of valnemulin at a single dose of 10 mg/kg. Serum biochemistry parameters of each pig were determined prior to drug administration. Three to five blood samples were collected at random time points, but uniformly distributed over the absorption, distribution, and elimination phases of drug disposition. Plasma concentrations of valnemulin were determined by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The concentration-time data were fitted to PPK models using nonlinear mixed effect modeling (NONMEM) with the G77 FORTRAN compiler. NONMEM runs were executed using Wings for NONMEM. Fixed effects of weight, age, and sex, as well as biochemistry parameters which may influence the PK of valnemulin, were investigated. The drug concentration-time data were adequately described by a one-compartment model with first-order absorption. A random effect model of valnemulin revealed a pattern of log-normal distribution, and it satisfactorily characterized the observed interindividual variability. The distribution of random residual errors, however, suggested an additive model for the initial phase (<12 h) followed by a combined model that consists of both proportional and additive features (≥12 h), so that the intra-individual variability could be sufficiently characterized. Covariate analysis indicated that body weight had a conspicuous effect on valnemulin clearance (CL/F). The estimated population PK values of Ka, V/F and CL/F were 0.292/h, 63.0 L and 41.3 L/h, respectively. © 2013 John Wiley & Sons Ltd.
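
    Using the reported typical values, the corresponding typical concentration-time profile can be sketched as below; the body weight is an assumption, and the interindividual and residual random components described above are deliberately omitted.

    ```python
    import numpy as np

    # Typical values from the abstract; the 50 kg body weight is an assumption.
    dose_mg = 10.0 * 50.0  # 10 mg/kg oral dose
    ka = 0.292             # 1/h, first-order absorption rate
    V = 63.0               # L, apparent volume of distribution (V/F)
    CL = 41.3              # L/h, apparent clearance (CL/F)
    ke = CL / V            # 1/h; note ka < ke here, i.e. "flip-flop" kinetics

    t = np.linspace(0.0, 24.0, 97)  # hours
    # Standard one-compartment solution with first-order absorption
    C = dose_mg * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
    print(f"Cmax ~ {C.max():.2f} mg/L at t ~ {t[C.argmax()]:.2f} h")
    ```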

  5. An Empirical Point Error Model for TLS-Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed at real-world test environments. A priori precisions of the horizontal (σ_θ) and vertical (σ_α) angles are constant for each point of a data set, and can directly be determined through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σ_θ = ±36.6 cc and σ_α = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σ_ρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula varying as σ_ρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of error ellipsoids for each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model was investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
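
    The propagation step can be reproduced compactly. In the sketch below, the spherical-to-Cartesian convention and the reading of "cc" as 1e-4 gon are assumptions; the angle precisions reuse the values quoted above, while the range precision and point geometry are arbitrary.

    ```python
    import numpy as np

    CC_TO_RAD = (np.pi / 200.0) * 1e-4   # assuming 1 cc = 1e-4 gon

    def error_ellipsoid(rho, theta, alpha, sig_rho, sig_theta_cc, sig_alpha_cc):
        """Point error ellipsoid by variance-covariance propagation.

        rho: range [m]; theta: horizontal angle [rad]; alpha: vertical angle [rad].
        """
        sig_t = sig_theta_cc * CC_TO_RAD
        sig_a = sig_alpha_cc * CC_TO_RAD
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        # Jacobian of (x, y, z) = (rho*ca*ct, rho*ca*st, rho*sa)
        # with respect to (rho, theta, alpha)
        J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
                      [ca * st,  rho * ca * ct, -rho * sa * st],
                      [sa,       0.0,            rho * ca]])
        cov = J @ np.diag([sig_rho**2, sig_t**2, sig_a**2]) @ J.T
        eigval, eigvec = np.linalg.eigh(cov)   # principal components
        return np.sqrt(eigval), eigvec         # semi-axes [m] and directions

    semi_axes, _ = error_ellipsoid(rho=30.0, theta=0.4, alpha=0.2,
                                   sig_rho=0.005,
                                   sig_theta_cc=36.6, sig_alpha_cc=17.8)
    print("ellipsoid semi-axes [mm]:", np.round(semi_axes * 1e3, 3))
    ```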

  6. Simulation of foulant bioparticle topography based on Gaussian process and its implications for interface behavior research

    NASA Astrophysics Data System (ADS)

    Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang

    2018-03-01

    Simulation of randomly rough bioparticle surfaces is crucial to better understand and control interface behaviors and membrane fouling. A survey of the literature indicated a lack of effective methods for simulating random rough bioparticle surfaces. In this study, a new method which combines a Gaussian distribution, Fourier transform, the spectrum method and coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography of foulant bioparticles was critically affected by the correlation length (l) and root-mean-square roughness (σ) parameters. The new method proposed in this study shows notable superiority over conventional methods for the simulation of randomly rough foulant bioparticles. The ease, facility and fitness of the new method point towards potential applications in interface behavior and membrane fouling research.
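
    A bare-bones spectrum-method simulation of such a surface (white Gaussian noise shaped in the Fourier domain, then rescaled to the target roughness) might look as follows; the Gaussian autocorrelation form and all numeric parameters are assumptions, not the paper's exact recipe.

    ```python
    import numpy as np

    def rough_surface(n=256, dx=1.0, sigma=1.0, corr_len=10.0, seed=0):
        """Gaussian random surface with RMS height sigma and an (assumed)
        Gaussian autocorrelation of correlation length corr_len."""
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal((n, n))       # uncorrelated Gaussian heights
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(k, k)
        # Gaussian filter in the spatial-frequency domain imprints the
        # desired correlation length on the white-noise field.
        filt = np.exp(-(kx**2 + ky**2) * corr_len**2 / 4.0)
        h = np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))
        return h * (sigma / h.std())              # rescale to requested RMS

    h = rough_surface()
    print(f"RMS = {h.std():.3f}, range = [{h.min():.2f}, {h.max():.2f}]")
    ```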

  7. A Survey of Mathematical Programming in the Soviet Union (Bibliography),

    DTIC Science & Technology

    1982-01-01

    ASTAFYEV, N. N., "METHOD OF LINEARIZATION IN CONVEX PROGRAMMING", TR 4-Y ZIMN SHKOLY PO MAT PROGRAMMIR I SMEZHN VOPR, DROGOBYCH, 72, VOL. 3, 54-73 2... AKADEMIYA KOMMUNAL'NOGO KHOZYAYSTVA (MOSCOW), 72, NO. 93, 70-77 19. GIMELFARB, G., V. MARCHENKO, V. RYBAK, "AUTOMATIC IDENTIFICATION OF IDENTICAL POINTS... DYNAMIC PROGRAMMING (CONTINUED) 25. KOLOSOV, G. Y., "ON ANALYTICAL SOLUTION OF DESIGN PROBLEMS FOR DISTRIBUTED OPTIMAL CONTROL SYSTEMS SUBJECTED TO RANDOM

  8. Reconfigurable Antenna Aperture with Optically Controlled GeTe-Based RF Switches

    DTIC Science & Technology

    2015-03-31

    duration (~100 ns) but high amplitude raises the material's temperature above the melting point. As a liquid, the atoms are randomly distributed...100 ns, there is sufficient optical energy to heat and melt a 100 nm thick GeTe PCM area of approximately 3 µm². Figure 3. Optimum PCM area...which tracks well with a previously published thin-film heater model [9]. Figure 4. Validation of Melt/Quench Thermal Model. Optical Control: The

  9. OBSIFRAC: database-supported software for 3D modeling of rock mass fragmentation

    NASA Astrophysics Data System (ADS)

    Empereur-Mot, Luc; Villemin, Thierry

    2003-03-01

    Under stress, fractures in rock masses tend to form fully connected networks. The mass can thus be thought of as a 3D series of blocks produced by fragmentation processes. A numerical model has been developed that uses a relational database to describe such a mass. The model, which assumes the fractures to be plane, allows data from natural networks to be used to test theories concerning fragmentation processes. In the model, blocks are bordered by faces that are composed of edges and vertices. A fracture can originate from a seed point, its orientation being controlled by the stress field specified by an orientation matrix. Alternatively, it can be generated from a discrete set of given orientations and positions. Both kinds of fracture can occur together in a model. From an original simple block, a given fracture produces two simple polyhedral blocks, and the original block becomes compound. Compound and simple blocks created throughout fragmentation are stored in the database. Several fragmentation processes have been studied. In one scenario, a constant proportion of blocks is fragmented at each step of the process. The resulting distribution appears to be fractal, although seed points are random in each fragmented block. In a second scenario, division affects only one random block at each stage of the process, and gives a Weibull volume distribution law. This software can be used for a large number of other applications.
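
    The first scenario is easy to emulate without the database machinery. In the sketch below, volumes stand in for blocks; the 50% splitting proportion and the uniform split-ratio law are placeholder choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fragment(volumes, frac):
        """Split a random fraction `frac` of blocks in two at a random
        volume ratio; returns the enlarged list of block volumes."""
        vols = list(volumes)
        n_split = max(1, int(frac * len(vols)))
        for idx in rng.choice(len(vols), size=n_split, replace=False):
            r = rng.uniform(0.1, 0.9)       # random internal seed position
            v = vols[idx]
            vols[idx] = r * v
            vols.append((1.0 - r) * v)
        return vols

    vols = [1.0]
    for _ in range(25):                      # scenario 1: constant proportion
        vols = fragment(vols, frac=0.5)
    vols = np.sort(vols)[::-1]
    # a roughly straight log(volume)-vs-log(rank) line would suggest a
    # fractal (power-law) block-size distribution
    print(len(vols), "blocks; three largest:", np.round(vols[:3], 4))
    ```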

  10. The Ciliate Paramecium Shows Higher Motility in Non-Uniform Chemical Landscapes

    PubMed Central

    Giuffre, Carl; Hinow, Peter; Vogel, Ryan; Ahmed, Tanvir; Stocker, Roman; Consi, Thomas R.; Strickler, J. Rudi

    2011-01-01

    We study the motility behavior of the unicellular protozoan Paramecium tetraurelia in a microfluidic device that can be prepared with a landscape of attracting or repelling chemicals. We investigate the spatial distribution of the positions of the individuals at different time points with methods from spatial statistics and Poisson random point fields. This makes quantitative the informal notion of “uniform distribution” (or lack thereof). Our device is characterized by the absence of large systematic biases due to gravitation and fluid flow. It has the potential to be applied to the study of other aquatic chemosensitive organisms as well. This may result in better diagnostic devices for environmental pollutants. PMID:21494596

  11. The influence of plan modulation on the interplay effect in VMAT liver SBRT treatments.

    PubMed

    Hubley, Emily; Pierce, Greg

    2017-08-01

    Volumetric modulated arc therapy (VMAT) uses multileaf collimator (MLC) leaves, gantry speed, and dose rate to modulate beam fluence, producing the highly conformal doses required for liver radiotherapy. When targets that move with respiration are treated with a dynamic fluence, there exists the possibility for interplay between the target and leaf motions. This study employs a novel motion simulation technique to determine if VMAT liver SBRT plans with an increase in MLC leaf modulation are more susceptible to dosimetric differences in the GTV due to interplay effects. For ten liver SBRT patients, two VMAT plans with different amounts of MLC leaf modulation were created. Motion was simulated using a random starting point in the respiratory cycle for each fraction. To isolate the interplay effect, motion was also simulated using four specific starting points in the respiratory cycle. The dosimetric differences caused by different starting points were examined by subtracting the resultant dose distributions from each other. When motion was simulated using random starting points for each fraction, or with specific starting points, there were significantly more dose differences in the GTV (maximum 100 cGy) for more highly modulated plans, but the overall plan quality was not adversely affected. Plans with more MLC leaf modulation are more susceptible to interplay effects, but the dose differences in the GTV are clinically negligible in magnitude. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  12. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.

  13. Site characteristics and prey abundance at foraging sites used by Lesser Scaup (Aythya affinis) wintering in Florida

    USGS Publications Warehouse

    Herring, Garth; Collazo, Jaime

    2009-01-01

    We examined site characteristics and prey abundances where wintering Aythya affinis (Lesser Scaup; hereafter scaup) foraged within three regions of the Indian River Lagoon system in central Florida. We observed that scaup concentrated in the Indian and Banana rivers; however, the density of prey items did not differ between foraging sites and random sites. We also found that site characteristics were similar between foraging and random sites. Differences in site characteristics between random points across all three regions did not explain the distribution of foraging scaup (no scaup foraged in the Mosquito Lagoon); however, prey densities were approximately 3 times lower in the Mosquito Lagoon region. Our study suggests that current habitat conditions within the northern Indian River Lagoon system meet the overwintering requirements of scaup; however, prey densities in the Mosquito Lagoon may have been too low to be profitable for foraging scaup during the period of our study.

  14. The two-point correlation function for groups of galaxies in the Center for Astrophysics redshift survey

    NASA Technical Reports Server (NTRS)

    Ramella, Massimo; Geller, Margaret J.; Huchra, John P.

    1990-01-01

    The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.

  15. A double hit model for the distribution of time to AIDS onset

    NASA Astrophysics Data System (ADS)

    Chillale, Nagaraja Rao

    2013-09-01

    Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.

  16. Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory

    NASA Astrophysics Data System (ADS)

    Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.

    2008-11-01

    It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line ℝ. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space ℝ^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in ℝ^d. We also demonstrate that spin-polarized fermionic systems in ℝ^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite-wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has a non-analytic behavior at the origin given by S(k) ~ |k| (k → 0). The latter result implies that the pair correlation function g2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^(d+1), which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[-κ(d) r^(d+1)] for large r and finite d, where κ(d) is a positive d-dependent constant. We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2^d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.

  17. On Pfaffian Random Point Fields

    NASA Astrophysics Data System (ADS)

    Kargin, V.

    2014-02-01

    We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.

  18. Statistical approaches for the determination of cut points in anti-drug antibody bioassays.

    PubMed

    Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A

    2015-03-01

    Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates, taking sources of future variability into account, such as runs, days, analysts, gender, drug spiking, and the biological variability of the un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may already contain ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between a normal and a log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
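
    As a minimal baseline for point v), the sketch below contrasts normal and log-normal screening cut points (mean plus 1.645 standard deviations, i.e. the upper 5% limit) on simulated drug-naive signals; it deliberately ignores the plate, run, and analyst effects that the random effects models above are designed to capture, and the data are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # hypothetical normalized signals from 120 drug-naive specimens
    signals = rng.lognormal(mean=0.0, sigma=0.25, size=120)

    z95 = stats.norm.ppf(0.95)   # 1.645, one-sided 5% false-positive target

    # cut point under a normality assumption on the raw scale
    cut_normal = signals.mean() + z95 * signals.std(ddof=1)

    # cut point under a log-normality assumption (back-transformed)
    log_s = np.log(signals)
    cut_lognormal = np.exp(log_s.mean() + z95 * log_s.std(ddof=1))

    print(f"normal cut point: {cut_normal:.3f}; "
          f"log-normal cut point: {cut_lognormal:.3f}")
    ```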

  19. Comment on Pisarenko et al., "Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory"

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2016-02-01

    In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding the extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of the extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should be used only with caution for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process as assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point for earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.

  20. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
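
    For orientation, the parametric special case of the distance method is sketched below: under a Poisson (random) pattern, π·density·R² for the point-to-nearest-plant distance R is exponentially distributed, which yields the maximum-likelihood estimator used in the sketch. The paper's nonparametric estimator replaces this distributional assumption with order statistics of the same squared distances; edge effects are ignored here.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # simulate a random (Poisson) plant population on the unit square
    true_density = 500.0
    plants = rng.random((rng.poisson(true_density), 2))

    # squared distance from each of n random sample points to the nearest plant
    n = 100
    pts = rng.random((n, 2))
    d2 = np.min(((pts[:, None, :] - plants[None, :, :]) ** 2).sum(-1), axis=1)

    # MLE valid for Poisson patterns: density = n / (pi * sum of squared distances)
    density_hat = n / (np.pi * d2.sum())
    print(f"true {true_density:.0f} /unit^2, estimated {density_hat:.0f} /unit^2")
    ```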

  1. Social influence in small-world networks

    NASA Astrophysics Data System (ADS)

    Sun, Kai; Mao, Xiao-Ming; Ouyang, Qi

    2002-12-01

    We report on our numerical studies of the Axelrod model for social influence in small-world networks. Our simulation results show that the topology of the network has a crucial effect on the evolution of cultures. As the randomness of the network increases, the system undergoes a transition from a highly fragmented phase to a uniform phase. We also find that the power-law distribution at the transition point, reported by Castellano et al., is not a critical phenomenon; it exists not only at the onset of the transition but also for almost any control parameter values. All these power-law distributions are stable against perturbations. A mean-field theory is developed to explain these phenomena.

  2. Ground deposition of liquid droplets released from a point source in the atmospheric surface layer

    NASA Astrophysics Data System (ADS)

    Panneton, Bernard

    1989-01-01

    A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source at 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of a few models. A new rotating droplet generator is described. Droplets are produced by the forced breakup of capillary liquid jets, and droplet coalescence is inhibited by the rotational motion of the spray head. The two-dimensional deposition patterns are presented in the form of plots of contours of constant density, normalized arcwise distributions and crosswind-integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill technique. Models of the crosswind-integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al. are evaluated. The results indicate that the Wilson et al. random-walk model is adequate for predicting the ground deposition of the 150 micron droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models were acceptable in light of the experimental data.

  3. Sensitivity of goodness-of-fit statistics to rainfall data rounding off

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Puliga, Michelangelo

    An analysis based on the L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a big problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fit of rainfall data to the generalized Pareto distribution. In order to detect an optimal threshold, keeping the largest possible number of data, we chose to apply a "failure-to-reject" method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among the sample data, affecting the distribution of goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
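
    The effect is simple to reproduce in miniature. The sketch below compares the Monte Carlo distribution of a Cramér-von Mises statistic for exact versus rounded generalized Pareto samples with known parameters; the statistic, sample size, and rounding step are illustrative and not those of the study.

    ```python
    import numpy as np
    from scipy.stats import cramervonmises, genpareto

    rng = np.random.default_rng(9)
    c, scale, n = 0.1, 10.0, 365        # GPD shape/scale and sample size

    def cvm_statistic(round_to=None):
        x = genpareto.rvs(c, scale=scale, size=n, random_state=rng)
        if round_to is not None:
            x = np.round(x / round_to) * round_to   # mimic rounded-off records
        return cramervonmises(x, genpareto.cdf, args=(c, 0.0, scale)).statistic

    exact = [cvm_statistic() for _ in range(500)]
    rounded = [cvm_statistic(round_to=1.0) for _ in range(500)]
    # rounding inflates the statistic, so percentage points derived for
    # continuous data become misleading
    print("95% point, exact data:  ", round(np.percentile(exact, 95), 3))
    print("95% point, rounded data:", round(np.percentile(rounded, 95), 3))
    ```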

  4. Training symmetry of weight distribution after stroke: a randomized controlled pilot study comparing task-related reach, Bobath and feedback training approaches.

    PubMed

    Mudie, M H; Winzeler-Mercay, U; Radwan, S; Lee, L

    2002-09-01

    To determine (1) the most effective of three treatment approaches to retrain seated weight distribution long-term after stroke and (2) whether improvements could be generalized to weight distribution in standing. Inpatient rehabilitation unit. Forty asymmetrical acute stroke subjects were randomly allocated to one of four groups in this pilot study. Changes in weight distribution were compared between the 10 subjects of each of three treatment groups (task-specific reach, Bobath, or Balance Performance Monitor [BPM] feedback training) and a no specific treatment control group. One week of measurement only was followed by two weeks of daily training sessions with the treatment to which the subject was randomly allocated. Measurements were performed using the BPM daily before treatment sessions, two weeks after cessation of treatment and 12 weeks post study. Weight distribution was calculated in terms of mean balance (percentage of total body weight) or the mean of 300 balance points over a 30-s data run. In the short term, the Bobath approach was the most effective treatment for retraining sitting symmetry after stroke (p = 0.004). Training with the BPM and no training were also significant (p = 0.038 and p = 0.035 respectively) and task-specific reach training failed to reach significance (p = 0.26). At 12 weeks post study 83% of the BPM training group, 38% of the task-specific reach group, 29% of the Bobath group and 0% of the untrained group were found to be distributing their weight to both sides. Some generalization of symmetry training in sitting to standing was noted in the BPM training group which appeared to persist long term. Results should be treated with caution due to the small group sizes. However, these preliminary findings suggest that it might be possible to restore postural symmetry in sitting in the early stages of rehabilitation with therapy that focuses on creating an awareness of body position.

  5. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    PubMed

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.

  6. Hybrid phase transition into an absorbing state: Percolation and avalanches

    NASA Astrophysics Data System (ADS)

    Lee, Deokjae; Choi, S.; Stippinger, M.; Kertész, J.; Kahng, B.

    2016-04-01

    Interdependent networks are more fragile under random attacks than simplex networks, because interlayer dependencies lead to cascading failures and finally to a sudden collapse. This is a hybrid phase transition (HPT), meaning that at the transition point the order parameter has a jump but there are also critical phenomena related to it. Here we study these phenomena on the Erdős-Rényi and the two-dimensional interdependent networks and show that the hybrid percolation transition exhibits two kinds of critical behaviors: divergence of the fluctuations of the order parameter and power-law size distribution of finite avalanches at a transition point. At the transition point global or "infinite" avalanches occur, while the finite ones have a power-law size distribution; thus the avalanche statistics also has the nature of a HPT. The exponent β_m of the order parameter is 1/2 under general conditions, while the value of the exponent γ_m characterizing the fluctuations of the order parameter depends on the system. The critical behavior of the finite avalanches can be described by another set of exponents, β_a and γ_a. These two critical behaviors are coupled by a scaling law: 1 − β_m = γ_a.

  7. Availability and mean time between failures of redundant systems with random maintenance of subsystems

    NASA Technical Reports Server (NTRS)

    Schneeweiss, W.

    1977-01-01

    It is shown how the availability and MTBF (Mean Time Between Failures) of a redundant system with subsystems maintained at the points of so-called stationary renewal processes can be determined from the distributions of the intervals between maintenance actions and of the failure-free operating intervals of the subsystems. The results make it possible, for example, to determine the frequency and duration of hidden failure states in computers which are incidentally corrected during the repair of observed failures.

  8. Collective purchase behavior toward retail price changes

    NASA Astrophysics Data System (ADS)

    Ueno, Hiromichi; Watanabe, Tsutomu; Takayasu, Hideki; Takayasu, Misako

    2011-02-01

    By analyzing a huge amount of point-of-sale data collected from Japanese supermarkets, we find power-law relationships between price and sales numbers. The estimated values of the exponents of these power laws depend on the category of products; however, they are independent of the stores, thereby implying the existence of universal human purchase behavior. The scatter of sales numbers around these power laws is generally approximated by log-normal distributions, implying that there are hidden random parameters which might proportionally affect the purchase activity.

  9. CDC6600 subroutine for normal random variables. [RVNORM (RMU, SIG)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amos, D.E.

    1977-04-01

    A value y for a uniform variable on (0,1) is generated, and a table of 96 percent points for the (0,1) normal distribution is interpolated for a value of the normal variable x(0,1) on 0.02 ≤ y ≤ 0.98. For the tails, the inverse normal is computed by a rational Chebyshev approximation in an appropriate variable. Then X = xσ + μ gives the X(μ,σ) variable.
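
    A modern paraphrase of that design is sketched below; scipy's exact inverse CDF stands in both for the stored table values and for the original rational Chebyshev tail approximation, so only the structure (table interpolation in the middle, a separate tail branch, then X = xσ + μ) is faithful.

    ```python
    import numpy as np
    from scipy.stats import norm

    # central-region table of standard normal quantiles on 0.02 <= y <= 0.98
    ys = np.linspace(0.02, 0.98, 97)
    table = norm.ppf(ys)

    def rvnorm(mu, sigma, rng=np.random.default_rng(4)):
        """Normal variate by inverse transform of a uniform y on (0,1)."""
        y = rng.random()
        if 0.02 <= y <= 0.98:
            x = np.interp(y, ys, table)   # table interpolation (central region)
        else:
            x = norm.ppf(y)               # tail branch
        return x * sigma + mu             # X = x*sigma + mu

    sample = np.array([rvnorm(10.0, 2.0) for _ in range(10000)])
    print(f"mean ~ {sample.mean():.2f}, sd ~ {sample.std():.2f}")
    ```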

  10. Efficacy and Safety of Cannabidiol and Tetrahydrocannabivarin on Glycemic and Lipid Parameters in Patients With Type 2 Diabetes: A Randomized, Double-Blind, Placebo-Controlled, Parallel Group Pilot Study.

    PubMed

    Jadoon, Khalid A; Ratcliffe, Stuart H; Barrett, David A; Thomas, E Louise; Stott, Colin; Bell, Jimmy D; O'Sullivan, Saoirse E; Tan, Garry D

    2016-10-01

    Cannabidiol (CBD) and Δ(9)-tetrahydrocannabivarin (THCV) are nonpsychoactive phytocannabinoids affecting lipid and glucose metabolism in animal models. This study set out to examine the effects of these compounds in patients with type 2 diabetes. In this randomized, double-blind, placebo-controlled study, 62 subjects with noninsulin-treated type 2 diabetes were randomized to five treatment arms: CBD (100 mg twice daily), THCV (5 mg twice daily), 1:1 ratio of CBD and THCV (5 mg/5 mg, twice daily), 20:1 ratio of CBD and THCV (100 mg/5 mg, twice daily), or matched placebo for 13 weeks. The primary end point was a change in HDL-cholesterol concentrations from baseline. Secondary/tertiary end points included changes in glycemic control, lipid profile, insulin sensitivity, body weight, liver triglyceride content, adipose tissue distribution, appetite, markers of inflammation, markers of vascular function, gut hormones, circulating endocannabinoids, and adipokine concentrations. Safety and tolerability end points were also evaluated. Compared with placebo, THCV significantly decreased fasting plasma glucose (estimated treatment difference [ETD] = -1.2 mmol/L; P < 0.05) and improved pancreatic β-cell function (HOMA2 β-cell function [ETD = -44.51 points; P < 0.01]), adiponectin (ETD = -5.9 × 10(6) pg/mL; P < 0.01), and apolipoprotein A (ETD = -6.02 μmol/L; P < 0.05), although plasma HDL was unaffected. Compared with baseline (but not placebo), CBD decreased resistin (-898 pg/ml; P < 0.05) and increased glucose-dependent insulinotropic peptide (21.9 pg/ml; P < 0.05). None of the combination treatments had a significant impact on end points. CBD and THCV were well tolerated. THCV could represent a new therapeutic agent in glycemic control in subjects with type 2 diabetes. © 2016 by the American Diabetes Association.

  11. A demonstration area for type 2 diabetes prevention in Barranquilla and Juan Mina (Colombia): Baseline characteristics of the study participants.

    PubMed

    Acosta, Tania; Barengo, Noël C; Arrieta, Astrid; Ricaurte, Carlos; Tuomilehto, Jaakko O

    2018-01-01

    Type 2 diabetes (T2D) imposes a heavy public health burden in both developed and developing countries. It is necessary to understand the effect of T2D in different settings and population groups. This report aimed to present baseline characteristics of study participants in the demonstration area for the "Type 2 Diabetes Prevention in Barranquilla and Juan Mina" (DEMOJUAN) project after randomization, and to compare their fasting and 2-hour glucose levels according to lifestyle and T2D risk factor levels. The DEMOJUAN project is a randomized controlled field trial. Study participants were recruited from the study sites by population-wide screening with the Finnish Diabetes Risk Score (FINDRISC) questionnaire. All volunteers with a FINDRISC of ≥13 points were invited to undergo an oral glucose tolerance test (OGTT). Participant inclusion criteria for the upcoming field trial were either a FINDRISC of ≥13 points and a 2-hour post-challenge glucose level of 7.0 to 11.0 mmol/L, or a FINDRISC of ≥13 points and a fasting plasma glucose level of 6.1 to 6.9 mmol/L. Lifestyle habits and risk factors for T2D were assessed by trained interviewers using a validated questionnaire. Among the 14,193 participants who completed the FINDRISC questionnaire, 35% (n = 4915) had a FINDRISC score of ≥13 points and 47% (n = 2306) agreed to undergo the OGTT. Approximately 33% (n = 772) of participants underwent the OGTT and met the entry criteria; these participants were randomized into 3 groups. There were no statistically significant differences found in anthropometric or lifestyle risk factors, distribution of the glucose metabolism categories, or other diabetes risk factors between the 3 groups (P > .05). Women with a past history of hyperglycaemia had significantly higher fasting glucose levels than those without previous hyperglycaemia (103 vs 99 mg/dL; P < .05). Lifestyle habits and risk factors were evenly distributed among the 3 study groups. No differences were found in fasting or 2-hour glucose levels among different lifestyle or risk factor categories, with the exception of body mass index, past history of hyperglycaemia, and age of ≥64 years in women. NCT01296100 (2/12/2011; ClinicalTrials.gov). Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.

  12. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several, possibly dissimilar, validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.

  13. Mathematical Model of Heat Transfer in the Catalyst Granule with Point Reaction Centers

    NASA Astrophysics Data System (ADS)

    Derevich, I. V.; Fokina, A. Yu.

    2018-01-01

    This paper considers a catalyst granule with a porous ceramic, chemically inert base and active point centers at which an exothermic synthesis reaction takes place. The rate of the chemical reaction depends on temperature by the Arrhenius law. The heat is removed from the catalyst granule surface to the synthesis products by heat transfer. Based on the idea of a self-consistent field, a closed system of equations is constructed for calculating the temperatures of the active centers. As an example, a catalyst granule of the Fischer-Tropsch synthesis with active metallic cobalt particles is considered. The stationary temperatures of the active centers are calculated by the time-dependent technique, by solving a system of ordinary differential equations. The temperature distribution inside the granule has been found for local centers located on one diameter of the granule and for centers distributed randomly in the granule's volume. The existence of a critical temperature inside the reactor has been established; exceeding it leads to substantial superheating of the local centers. The temperature distribution with local reaction centers differs qualitatively from the granule temperature calculated in the homogeneous approximation. The results of calculations are given.

  14. Phase transitions in the distribution of the Andreev conductance of superconductor-metal junctions with multiple transverse modes.

    PubMed

    Damle, Kedar; Majumdar, Satya N; Tripathi, Vikram; Vivo, Pierpaolo

    2011-10-21

    We compute analytically the full distribution of the Andreev conductance G_NS of a metal-superconductor interface with a large number N_c of transverse modes, using a random matrix approach. The probability distribution P(G_NS, N_c) in the limit of large N_c displays a Gaussian behavior near the average value ⟨G_NS⟩ = (2 − √2)N_c and asymmetric power-law tails in the two limits of very small and very large G_NS. In addition, we find a novel third regime sandwiched between the central Gaussian peak and the power-law tail for large G_NS. Weakly nonanalytic points separate these four regimes; these are shown to be consequences of three phase transitions in an associated Coulomb gas problem. © 2011 American Physical Society

  15. A nonparametric significance test for sampled networks.

    PubMed

    Elliott, Andrew; Leicht, Elizabeth; Whitmore, Alan; Reinert, Gesine; Reed-Tsochas, Felix

    2018-01-01

    Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimum seed list that generates the subnetwork; in this random seed list the nodes have (approximately) the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which deviate significantly from random on an appropriate set of statistics and might capture useful information for a real-world protein-protein interaction network. The software used in this paper is available for download at https://sites.google.com/site/elliottande/. The software is written in Python and uses the NetworkX library. ande.elliott@gmail.com or felix.reed-tsochas@sbs.ox.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  16. Diagnostic Value of Run Chart Analysis: Using Likelihood Ratios to Compare Run Chart Rules on Simulated Data Series

    PubMed Central

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
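
    For concreteness, the sketch below implements median-based run chart tests in the spirit of the Anhoej rules; the critical values used (longest run above round(log2(n) + 3), crossings below the 5th percentile of a binomial(n − 1, 1/2)) follow the published rules as this author understands them and should be checked against the original papers before use.

    ```python
    import numpy as np
    from scipy.stats import binom

    def runchart_signals(y):
        """Tests for non-random variation around the median (Anhoej-style)."""
        y = np.asarray(y, dtype=float)
        sides = np.sign(y - np.median(y))
        sides = sides[sides != 0]                  # ignore points on the median
        n = len(sides)
        change = np.flatnonzero(np.diff(sides))    # indices where the side flips
        bounds = np.concatenate(([0], change + 1, [n]))
        longest_run = int(np.max(np.diff(bounds)))
        crossings = len(change)
        run_limit = round(np.log2(n) + 3)          # upper limit for longest run
        cross_limit = binom.ppf(0.05, n - 1, 0.5)  # lower limit for crossings
        return longest_run > run_limit, crossings < cross_limit

    rng = np.random.default_rng(3)
    stable = rng.normal(size=24)
    shifted = np.concatenate([rng.normal(size=12), rng.normal(loc=2.0, size=12)])
    print("stable  (long run, few crossings):", runchart_signals(stable))
    print("shifted (long run, few crossings):", runchart_signals(shifted))
    ```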

  18. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except for the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed and an example of empirical data is presented in the current contribution.
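
    For reference, a common parameterization of the TED and the weak point described above can be written out explicitly (the notation here is assumed, not necessarily the paper's):

    ```latex
    % TED distribution function on m_0 \le m \le m_{\max}:
    F_{\mathrm{TED}}(m \mid m_{\max}) =
        \frac{1 - e^{-\beta (m - m_0)}}{1 - e^{-\beta (m_{\max} - m_0)}}
    % Mixing two TEDs with common \beta, m_0 but different bounds M_1 < M_2:
    F_{\mathrm{mix}}(m) = w\,F_{\mathrm{TED}}(m \mid M_1)
                        + (1 - w)\,F_{\mathrm{TED}}(m \mid M_2)
    % For M_1 < m \le M_2 the first component is already saturated at 1, so the
    % mixture's density drops discontinuously at M_1 -- a shape that no single
    % TED can reproduce.
    ```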

  19. Continuous representation of tumor microvessel density and detection of angiogenic hotspots in histological whole-slide images.

    PubMed

    Kather, Jakob Nikolas; Marx, Alexander; Reyes-Aldasoro, Constantino Carlos; Schad, Lothar R; Zöllner, Frank Gerrit; Weis, Cleo-Aron

    2015-08-07

    Blood vessels in solid tumors are not randomly distributed, but are clustered in angiogenic hotspots. Tumor microvessel density (MVD) within these hotspots correlates with patient survival and is widely used both in diagnostic routine and in clinical trials. Still, these hotspots are usually subjectively defined. There is no unbiased, continuous and explicit representation of tumor vessel distribution in histological whole slide images. This shortcoming distorts angiogenesis measurements and may account for ambiguous results in the literature. In the present study, we describe and evaluate a new method that eliminates this bias and makes angiogenesis quantification more objective and more efficient. Our approach involves automatic slide scanning, automatic image analysis and spatial statistical analysis. By comparing a continuous MVD function of the actual sample to random point patterns, we introduce an objective criterion for hotspot detection: An angiogenic hotspot is defined as a clustering of blood vessels that is very unlikely to occur randomly. We evaluate the proposed method in N=11 images of human colorectal carcinoma samples and compare the results to a blinded human observer. For the first time, we demonstrate the existence of statistically significant hotspots in tumor images and provide a tool to accurately detect these hotspots.
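
    The Monte Carlo criterion can be prototyped in a few lines. The sketch below is a simplification of the idea, not the authors' pipeline: vessel coordinates are synthetic, a kernel density estimate stands in for the paper's continuous MVD function, and the reference distribution is complete spatial randomness.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(5)

    # hypothetical vessel centres in a normalized slide region
    vessels = np.vstack([rng.normal([0.3, 0.6], 0.05, (60, 2)),  # a dense cluster
                         rng.random((140, 2))])                  # background

    # continuous "MVD" surface: kernel density estimate on a grid
    grid = np.mgrid[0:1:60j, 0:1:60j].reshape(2, -1)
    mvd_peak = gaussian_kde(vessels.T)(grid).max()

    # reference: the same peak statistic for random (uniform) point patterns
    n_sim, n = 99, len(vessels)
    sim_peaks = np.array([gaussian_kde(rng.random((n, 2)).T)(grid).max()
                          for _ in range(n_sim)])

    # a hotspot is a clustering very unlikely under randomness
    p = (1 + np.sum(sim_peaks >= mvd_peak)) / (n_sim + 1)
    print(f"peak density {mvd_peak:.2f}; Monte Carlo p ~ {p:.2f}")
    ```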

  20. Ising lattices with +/-J second-nearest-neighbor interactions

    NASA Astrophysics Data System (ADS)

    Ramírez-Pastor, A. J.; Nieto, F.; Vogel, E. E.

    1997-06-01

    Second-nearest-neighbor interactions are added to the usual nearest-neighbor Ising Hamiltonian for square lattices in different ways. The starting point is a square lattice where half the nearest-neighbor interactions are ferromagnetic and the other half of the bonds are antiferromagnetic. Then, second-nearest-neighbor interactions can also be assigned randomly or in a variety of causal manners determined by the nearest-neighbor interactions. In the present paper we consider three causal and three random ways of assigning second-nearest-neighbor exchange interactions. Several ground-state properties are then calculated for each of these lattices: energy per bond ε_g, site correlation parameter p_g, maximal magnetization μ_g, and fraction of unfrustrated bonds h_g. A set of 500 samples is considered for each size N (number of spins) and array (way of distributing the N spins). The properties of the original lattices with only nearest-neighbor interactions are already known, which allows realizing the effect of the additional interactions. We also include cubic lattices to discuss the distinction between coordination number and dimensionality. Comparison with results for triangular and honeycomb lattices is done at specific points.

  1. Evaluating Bayesian spatial methods for modelling species distributions with clumped and restricted occurrence data.

    PubMed

    Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E

    2017-01-01

    Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which are often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs is now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT, and boosted regression trees, BRT) to a spatial Bayesian SDM method (fitted using R-INLA), when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate, ranking among the two most accurate methods in 7 out of 8 data-sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy over the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4-8% better AUC score. In contrast, when sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods such as those made available by R-INLA can be successfully used to account for spatial autocorrelation in an SDM context and, by taking account of random effects, produce outputs that can better elucidate the role of covariates in predicting species occurrence. Given that it is often unclear what the drivers are behind data clumping in an empirical occurrence dataset, or indeed how geographically restricted these data are, spatially explicit Bayesian SDMs may be the better choice when modelling the spatial distribution of target species.

  2. Stepwise magnetic-geochemical approach for efficient assessment of heavy metal polluted sites

    NASA Astrophysics Data System (ADS)

    Appel, E.; Rösler, W.; Ojha, G.

    2012-04-01

    Previous studies have shown that magnetometry can outline the distribution of fly ash deposition in the surroundings of coal-burning power plants and steel industries. In particular, the easy-to-measure magnetic susceptibility (MS) can act as a proxy for heavy metal (HM) pollution caused by such point sources. Here we present a demonstration project around the coal-burning power plant complex "Schwarze Pumpe" in eastern Germany. Before the reunification of West and East Germany, huge amounts of HM pollutants were emitted from the "Schwarze Pumpe" into the environment by both fly ash emission and dumped clinker. The project was conducted as part of the TASK Centre of Competence, which aims at bringing new innovative techniques closer to the market. Our project combines in situ and laboratory MS measurements and HM analyses in order to demonstrate the efficiency of a stepwise approach to site assessment of HM pollution around point sources of fly ash emission and deposition into soil. The following scenario is played through: we assume that the "true" spatial distribution of HM pollution (given by the pollution load index, PLI, comprising Fe, Zn, Pb, and Cu) is represented by our entire set of 85 measured samples (XRF analyses) from forest sites around the "Schwarze Pumpe". Surface MS data (collected with a Bartington MS2D) and in situ vertical MS sections (logged with an SM400 instrument) are used to obtain a qualitative overview of potentially higher and lower polluted areas. A suite of spatial HM distribution maps obtained by random selections of 30 out of the 85 analysed sites is compared to the HM map obtained from a targeted 30-site selection based on pre-information from the MS results. The PLI distribution map obtained from the targeted 30-site selection shows all essential details of the "true" pollution map, while the different random 30-site selections miss important features. This comparison shows that, for the same cost investment, a stepwise combined magnetic-geochemical site assessment characterizes soil pollution much more reliably than a common approach with exclusively random sampling for geochemical analysis, or alternatively delivers a result of equal quality at lower cost.
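
    For reference, a pollution load index of this kind is commonly computed as the geometric mean of single-metal contamination factors; a minimal sketch (the background concentrations below are made-up placeholders, not the study's reference values):

    ```python
    import numpy as np

    # Hypothetical background concentrations (mg/kg); the study's actual
    # reference values are not given in the abstract.
    background = {"Fe": 25000.0, "Zn": 60.0, "Pb": 20.0, "Cu": 25.0}

    def pollution_load_index(sample: dict) -> float:
        """PLI as the geometric mean of the contamination factors
        CF = C_sample / C_background (Tomlinson-style definition)."""
        cf = [sample[m] / background[m] for m in background]
        return float(np.prod(cf) ** (1.0 / len(cf)))

    # Example XRF result for one forest-soil site (made-up numbers).
    site = {"Fe": 41000.0, "Zn": 150.0, "Pb": 55.0, "Cu": 40.0}
    print(f"PLI = {pollution_load_index(site):.2f}")   # PLI > 1 suggests pollution
    ```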

  3. Computer Simulation Results for the Two-Point Probability Function of Composite Media

    NASA Astrophysics Data System (ADS)

    Smith, P.; Torquato, S.

    1988-05-01

    Computer simulation results are reported for the two-point matrix probability function S2 of two-phase random media composed of disks distributed with an arbitrary degree of impenetrability λ. The novel technique employed to sample S2(r) (which gives the probability of finding both endpoints of a line segment of length r in the matrix) is very accurate and has a fast execution time. Results for the limiting cases λ = 0 (fully penetrable disks) and λ = 1 (hard disks), respectively, compare very favorably with theoretical predictions made by Torquato and Beasley and by Torquato and Lado. Results are also reported for several values of λ that lie between these two extremes: cases which heretofore have not been examined.
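
    A rough Monte Carlo sketch of sampling S2(r) by throwing random line segments into a disk configuration (disk number, radius, and trial count are illustrative; the paper's actual sampling technique is more refined):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_S2(centers, radius, r, box=1.0, trials=20_000):
        """Monte Carlo estimate of the two-point matrix probability S2(r):
        throw random segments of length r into a periodic unit box and count
        how often both endpoints avoid every disk."""
        p1 = rng.random((trials, 2)) * box
        theta = rng.random(trials) * 2 * np.pi
        p2 = (p1 + r * np.column_stack((np.cos(theta), np.sin(theta)))) % box

        def in_matrix(pts):
            # minimum-image distance from each point to every disk centre
            d = pts[:, None, :] - centers[None, :, :]
            d -= box * np.round(d / box)
            return np.all(np.hypot(d[..., 0], d[..., 1]) > radius, axis=1)

        return (in_matrix(p1) & in_matrix(p2)).mean()

    # Fully penetrable disks (lambda = 0): centres form an ideal Poisson pattern.
    centers = rng.random((100, 2))
    print(sample_S2(centers, radius=0.03, r=0.05))
    ```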

  4. Habitat classification modeling with incomplete data: Pushing the habitat envelope

    USGS Publications Warehouse

    Zarnetske, P.L.; Edwards, T.C.; Moisen, Gretchen G.

    2007-01-01

    Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical techniques for HCMs. One option is to generate pseudo-absence points so that the many available statistical modeling tools can be used. Traditional techniques generate pseudo-absence points at random across broadly defined species ranges, often failing to include biological knowledge concerning the species-habitat relationship. We incorporated biological knowledge of the species-habitat relationship into pseudo-absence points by creating habitat envelopes that constrain the region from which points were randomly selected. We define a habitat envelope as an ecological representation of a species', or species feature's (e.g., nest), observed distribution (i.e., realized niche) based on a single attribute or the spatial intersection of multiple attributes. We created HCMs for Northern Goshawk (Accipiter gentilis atricapillus) nest habitat during the breeding season across Utah forests with extant nest presence points and ecologically based pseudo-absence points using logistic regression. Predictor variables were derived from 30-m USDA Landfire and 250-m Forest Inventory and Analysis (FIA) map products. These habitat-envelope-based models were then compared to null envelope models, which use traditional practices for generating pseudo-absences. Models were assessed for fit and predictive capability using metrics such as kappa, threshold-independent receiver operating characteristic (ROC) plots, adjusted deviance (D^2_adj), and cross-validation, and were also assessed for ecological relevance. For all cases, habitat-envelope-based models outperformed null envelope models and were more ecologically relevant, suggesting that incorporating biological knowledge into pseudo-absence point generation is a powerful tool for species habitat assessments. Furthermore, given some a priori knowledge of the species-habitat relationship, ecologically based pseudo-absence points can be applied to any species, ecosystem, data resolution, and spatial extent. © 2007 by the Ecological Society of America.
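
    A minimal sketch of envelope-constrained pseudo-absence generation (the raster layers, thresholds, and counts below are hypothetical placeholders, not the study's Landfire/FIA attributes):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical raster layers over a study area (one value per grid cell).
    elevation = rng.uniform(1000, 3500, size=(200, 200))   # m
    canopy    = rng.uniform(0, 100, size=(200, 200))       # % cover

    # A habitat envelope: the spatial intersection of single-attribute
    # envelopes derived from biological knowledge (thresholds are made up).
    envelope = (elevation > 1800) & (elevation < 3000) & (canopy > 40)

    def draw_pseudo_absences(envelope, n):
        """Sample pseudo-absence cells at random, but only from inside the
        envelope rather than from the whole broadly defined range."""
        rows, cols = np.nonzero(envelope)
        idx = rng.choice(len(rows), size=n, replace=False)
        return np.column_stack((rows[idx], cols[idx]))

    pa = draw_pseudo_absences(envelope, n=500)
    print(pa.shape, "pseudo-absence cells drawn from", envelope.sum(), "candidates")
    ```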

  5. Continuous description of fluctuating eccentricities

    NASA Astrophysics Data System (ADS)

    Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves

    2014-11-01

    We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ (x), whose probability distribution P [ ρ ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
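
    As an illustration of the eccentricity observable discussed above, a sketch computing ε_n from a toy independent-source density (the moment definition ε_n = |⟨r^n e^{inφ}⟩_ρ| is the standard one; all numbers are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def eccentricity(x, y, w, n=2):
        """Eccentricity eps_n of a transverse energy distribution sampled as
        weighted points: eps_n = |<r^n exp(i n phi)>_rho / <r^n>_rho|,
        evaluated in the centre-of-mass frame."""
        x = x - np.average(x, weights=w)
        y = y - np.average(y, weights=w)
        z = x + 1j * y
        r, phi = np.abs(z), np.angle(z)
        return np.abs(np.average(r**n * np.exp(1j * n * phi), weights=w) /
                      np.average(r**n, weights=w))

    # Toy model of independent sources: N random hot spots in the transverse plane.
    N = 50
    x, y = rng.normal(0, 3, N), rng.normal(0, 3, N)   # positions (fm)
    w = rng.exponential(1.0, N)                        # fluctuating source strengths
    print("epsilon_2 =", eccentricity(x, y, w, n=2))
    ```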

  6. Gravitational Lensing Effect on the Two-Point Correlation of Hot Spots in the Cosmic Microwave Background.

    PubMed

    Takada; Komatsu; Futamase

    2000-04-20

    We investigate the weak gravitational lensing effect that is due to the large-scale structure of the universe on two-point correlations of local maxima (hot spots) in the two-dimensional sky map of the cosmic microwave background (CMB) anisotropy. According to the Gaussian random statistics, as most inflationary scenarios predict, the hot spots are discretely distributed, with some characteristic angular separations on the last scattering surface that are due to oscillations of the CMB angular power spectrum. The weak lensing then causes pairs of hot spots, which are separated with the characteristic scale, to be observed with various separations. We found that the lensing fairly smooths out the oscillatory features of the two-point correlation function of hot spots. This indicates that the hot spot correlations can be a new statistical tool for measuring the shape and normalization of the power spectrum of matter fluctuations from the lensing signatures.

  7. Distribution majorization of corner points by reinforcement learning for moving object detection

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang

    2018-04-01

    Corner points play an important role in moving object detection, especially in the case of a free-moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works use only intensity information to locate corner points; however, the information provided by preceding and following frames can also be used. We utilize this information to focus on more valuable areas and ignore less valuable ones. The proposed algorithm is based on reinforcement learning, which regards the detection of corner points as a Markov process. In the Markov model, the video to be detected is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the performance of detection is regarded as the state. Corner points are assigned to blocks that are separated from the original whole image. Experimentally, we select a conventional method which uses matching and the Random Sample Consensus algorithm to obtain objects as the main framework, and utilize our algorithm to improve the result. The comparison between the conventional method and the same one with our algorithm shows that our algorithm reduces false detections by 70%.

  8. Efficient search of multiple types of targets

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-12-01

    Random searches often take place in fragmented landscapes. Also, in many instances, such as animal foraging, significant benefits to the searcher arise from visits to a large diversity of patches with a well-balanced distribution of targets found. To date, such aspects have been widely ignored in the usual single-objective analysis of search efficiency, in which one seeks to maximize just the number of targets found per distance traversed. Here we address the problem of determining the best strategies for the random search when these multiple-objective factors play a key role in the process. We consider a figure of merit (efficiency function) which properly "scores" the mentioned tasks. By considering random walk searchers with a power-law asymptotic Lévy distribution of step lengths, p(ℓ) ~ ℓ^-μ with 1 < μ ≤ 3, we show that the standard optimal strategy with μ_opt ≈ 2 no longer holds universally. Instead, optimal searches with enhanced superdiffusivity emerge, including values as low as μ_opt ≈ 1.3 (i.e., tending to the ballistic limit). For the general theory of random search optimization, our findings emphasize the necessity of correctly characterizing the multitude of aims in any concrete metric used to compare candidate strategies. In the context of animal foraging, our results might explain some empirical data pointing to stronger superdiffusion (μ < 2) in the search behavior of different animal species, conceivably associated with multiple goals to be achieved in fragmented landscapes.
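
    A minimal sketch of drawing step lengths from p(ℓ) ~ ℓ^-μ by inverse-transform sampling (the lower cutoff ℓ0 and sample sizes are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def levy_steps(mu, n, ell0=1.0):
        """Draw step lengths from the power law p(l) ~ l**(-mu) for l >= ell0
        (1 < mu <= 3) by inverse-transform sampling: since
        F(l) = 1 - (l/ell0)**(1-mu), we have l = ell0 * u**(-1/(mu-1))
        for u uniform on (0, 1]."""
        u = 1.0 - rng.random(n)          # shift to (0, 1] to avoid u = 0
        return ell0 * u ** (-1.0 / (mu - 1.0))

    for mu in (1.3, 2.0, 3.0):           # mu_opt ~ 1.3 arises in the multi-objective setting
        steps = levy_steps(mu, 100_000)
        # report the median, since the mean diverges for mu <= 2
        print(f"mu = {mu}: median step = {np.median(steps):.2f}")
    ```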

  9. The Incremental Effects of Manual Therapy or Booster Sessions in Addition to Exercise Therapy for Knee Osteoarthritis: A Randomized Clinical Trial.

    PubMed

    Abbott, J Haxby; Chapple, Catherine M; Fitzgerald, G Kelley; Fritz, Julie M; Childs, John D; Harcombe, Helen; Stout, Kirsten

    2015-12-01

    A factorial randomized controlled trial. To investigate the addition of manual therapy to exercise therapy for the reduction of pain and increase of physical function in people with knee osteoarthritis (OA), and whether "booster sessions" compared to consecutive sessions may improve outcomes. The benefits of providing manual therapy in addition to exercise therapy, or of distributing treatment sessions over time using periodic booster sessions, in people with knee OA are not well established. All participants had knee OA and were provided 12 sessions of multimodal exercise therapy supervised by a physical therapist. Participants were randomly allocated to 1 of 4 groups: exercise therapy in consecutive sessions, exercise therapy distributed over a year using booster sessions, exercise therapy plus manual therapy without booster sessions, and exercise therapy plus manual therapy with booster sessions. The primary outcome measure was the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC score; 0-240 scale) at 1-year follow-up. Secondary outcome measures were the numeric pain-rating scale and physical performance tests. Of 75 participants recruited, 66 (88%) were retained at 1-year follow-up. Factorial analysis of covariance of the main effects showed significant benefit from booster sessions (P = .009) and manual therapy (P = .023) over exercise therapy alone. Group analysis showed that exercise therapy with booster sessions (WOMAC score, -46.0 points; 95% confidence interval [CI]: -80.0, -12.0) and exercise therapy plus manual therapy (WOMAC score, -37.5 points; 95% CI: -69.7, -5.5) had superior effects compared with exercise therapy alone. The combined strategy of exercise therapy plus manual therapy with booster sessions was not superior to exercise therapy alone. Distributing 12 sessions of exercise therapy over a year in the form of booster sessions was more effective than providing 12 consecutive exercise therapy sessions. Providing manual therapy in addition to exercise therapy improved treatment effectiveness compared to providing 12 consecutive exercise therapy sessions alone. Trial registered with the Australian New Zealand Clinical Trials Registry (ACTRN12612000460808).

  10. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shifts and species extinctions. In this context, species distribution modelling (SDM) is a popular method for projecting the impact of climate change on ecosystems. SDM is based on the niche of a species, which means that presence-point data are essential for running it. Running SDMs for plants requires certain considerations regarding the characteristics of vegetation. Normally, remote sensing techniques are used to map vegetation over large areas. Consequently, the exact location of a presence point carries high uncertainty, as presence data are selected from polygon and raster datasets. Thus, sampling methods for vegetation presence data should be carefully selected. In this study, we used three different methods for sampling vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from the modelling, and included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods produced different ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. As a result of this study, the uncertainties arising from presence-data sampling methods and SDM can be quantified.
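
    A minimal sketch contrasting random and stratified presence sampling from a rasterized vegetation map (the layers, strata, and counts are made up; the study's site-index criterion is not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical vegetation map: True where the target community is mapped,
    # plus a categorical raster of strata (e.g. site-index classes).
    veg = rng.random((300, 300)) < 0.1
    strata = rng.integers(0, 5, size=(300, 300))

    def random_presence(veg, n):
        """Simple random sample of presence cells from the whole map."""
        rows, cols = np.nonzero(veg)
        idx = rng.choice(len(rows), n, replace=False)
        return np.column_stack((rows[idx], cols[idx]))

    def stratified_presence(veg, strata, n_per_stratum):
        """Draw an equal number of presence cells from every stratum that
        intersects the mapped vegetation."""
        picks = []
        for s in np.unique(strata):
            rows, cols = np.nonzero(veg & (strata == s))
            if len(rows) >= n_per_stratum:
                idx = rng.choice(len(rows), n_per_stratum, replace=False)
                picks.append(np.column_stack((rows[idx], cols[idx])))
        return np.vstack(picks)

    print(random_presence(veg, 100).shape, stratified_presence(veg, strata, 20).shape)
    ```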

  11. Random growth lattice filling model of percolation: a crossover from continuous to discontinuous transition

    NASA Astrophysics Data System (ADS)

    Roy, Bappaditya; Santra, S. B.

    2018-05-01

    A random growth lattice filling model of percolation with a touch-and-stop growth rule is developed and studied numerically on a two-dimensional square lattice. Nucleation centers are continuously added one at a time to the empty lattice sites, and clusters are grown from these nucleation centers with a growth probability g. For a given g, the system passes through a critical point during the growth process where the transition from a disconnected to a connected phase occurs. The model is found to exhibit second-order continuous percolation transitions, as in ordinary percolation, for small g, whereas for large g it exhibits weak first-order discontinuous percolation transitions. The continuous transitions are characterized by estimating the values of the critical exponents associated with the order parameter fluctuation and the fractal dimension of the spanning cluster over the whole range of g. The discontinuous transitions, however, are characterized by a compact spanning cluster, a lattice-size-independent fluctuation of the order parameter per lattice, a departure from power-law scaling in the cluster size distribution, and a weak bimodal distribution of the order parameter. The nature of the transitions is further confirmed by studying the Binder cumulant. Instead of a sharp tricritical point, a tricritical region is found to occur for 0.5 < g < 0.8, within which the values of the critical exponents change continuously until the crossover from continuous to discontinuous transition is completed.

  12. Application of theoretical models to active and passive remote sensing of saline ice

    NASA Technical Reports Server (NTRS)

    Han, H. C.; Kong, Jin Au; Shin, Robert T.; Nghiem, Son V.; Kwok, R.

    1992-01-01

    The random medium model is used to interpret the polarimetric active and passive measurements of saline ice. The ice layer is described as a host ice medium embedded with randomly distributed inhomogeneities, and the underlying sea water is considered as a homogeneous half-space. The scatterers in the ice layer are modeled with an ellipsoidal correlation function. The orientation of the scatterers is vertically aligned and azimuthally random. The strong permittivity fluctuation theory is employed to calculate the effective permittivity, and the distorted Born approximation is used to obtain the polarimetric scattering coefficients. We also calculate the thermal emissions based on the reciprocity and energy conservation principles. The effects of the random roughness at the air-ice and ice-water interfaces are accounted for by adding the surface scattering to the volume scattering return incoherently. The above theoretical model, which has been successfully applied to analyze the radar backscatter data of first-year sea ice near Point Barrow, AK, is used to interpret the measurements performed in the CRRELEX program.

  13. Scale-invariant puddles in graphene: Geometric properties of electron-hole distribution at the Dirac point.

    PubMed

    Najafi, M N; Nezhadhaghighi, M Ghasemi

    2017-03-01

    We characterize the carrier density profile of the ground state of graphene in the presence of particle-particle interactions and random charged impurities at zero gate voltage. We provide a detailed analysis of the resulting spatially inhomogeneous electron gas, taking into account the particle-particle interaction and the remote Coulomb disorder on an equal footing within the Thomas-Fermi-Dirac theory. We present some general features of the carrier density probability measure of the graphene sheet. We also show that, when viewed as a random surface, the electron-hole puddles at zero chemical potential show peculiar self-similar statistical properties. Although the disorder potential is chosen to be Gaussian, we show that the charge field is non-Gaussian with unusual Kondev relations, which can be regarded as a new class of two-dimensional random-field surfaces. Using Schramm-Loewner evolution (SLE), we numerically demonstrate that ungated graphene has conformal invariance and that the random zero-charge-density contours are SLE_{κ} with κ=1.8±0.2, consistent with c=-3 conformal field theory.

  14. Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.

    NASA Astrophysics Data System (ADS)

    Stossel, Bryan Joseph

    1995-01-01

    Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.

  15. Dynamics of Nearest-Neighbour Competitions on Graphs

    NASA Astrophysics Data System (ADS)

    Rador, Tonguç

    2017-10-01

    Considering a collection of agents representing the vertices of a graph endowed with integer points, we study the asymptotic dynamics of the rate of increase of their points according to a very simple rule: we randomly pick an edge from the graph, which unambiguously defines two agents, and give a point to the agent with the larger score with probability p and to the laggard with probability q, such that p+q=1. The model we present is the most general version of the nearest-neighbour competition model introduced by Ben-Naim, Vazquez and Redner. We show that the model combines aspects of hyperbolic partial differential equations (such as a conservation law), graph colouring and hyperplane arrangements. We discuss the properties of the model for general graphs, but confine the in-depth study to d-dimensional tori. We present a detailed study of the ring graph, including a chemical potential approximation to calculate all its statistics that gives rather accurate results. The two-dimensional torus, not studied in as much depth as the ring, is shown to possess critical behaviour in that the asymptotic speeds arrange themselves in two-coloured islands separated by borders of three other colours, and the sizes of the islands obey a power-law distribution. We also show that in the large-d limit the d-dimensional torus follows an inverse-sine law for the distribution of asymptotic speeds.
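
    A minimal sketch of the update rule on a ring graph (parameters are illustrative; the paper's analysis, e.g. the chemical potential approximation, is not attempted here):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def compete(n_agents, p, steps):
        """Nearest-neighbour competition on a ring: repeatedly pick a random
        edge and award one point to the currently leading endpoint with
        probability p, to the lagging endpoint with probability q = 1 - p
        (ties are broken at random)."""
        points = np.zeros(n_agents, dtype=int)
        for _ in range(steps):
            i = rng.integers(n_agents)
            j = (i + 1) % n_agents                 # a random edge (i, j) of the ring
            if points[i] == points[j]:
                winner = i if rng.random() < 0.5 else j
            else:
                leader = i if points[i] > points[j] else j
                laggard = j if leader == i else i
                winner = leader if rng.random() < p else laggard
            points[winner] += 1
        return points / steps                      # scores per elementary update

    print(compete(n_agents=100, p=0.8, steps=200_000)[:10])
    ```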

  16. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery

    PubMed Central

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2017-01-01

    Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m² surface area in the Piemonte region, Italy; counts and positions of symptomatic plants were used to test the hypothesis of epidemic Complete Spatial Randomness and isotropicity in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from disperse (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemic modeling should include the environmental setting (e.g., vineyard geometry and topography) to capture anisotropicity, as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581

  17. Criticality in finite dynamical networks

    NASA Astrophysics Data System (ADS)

    Rohlf, Thimo; Gulbahce, Natali; Teuscher, Christof

    2007-03-01

    It has been shown analytically and experimentally that both random Boolean and random threshold networks show a transition from ordered to chaotic dynamics at a critical average connectivity Kc in the thermodynamic limit [1]. By looking at the statistical distributions of damage spreading (damage sizes), we go beyond this extensively studied mean-field approximation. We study the scaling properties of damage size distributions as a function of system size N and initial perturbation size d(t=0). We present numerical evidence that another characteristic point, Kd, exists for finite system sizes, where the expectation value of damage spreading in the network is independent of the system size N. Further, the probability of obtaining critical networks is investigated for a given system size and average connectivity k. Our results suggest that, for finite-size dynamical networks, the phase space structure is very complex and may not exhibit a sharp order-disorder transition. Finally, we discuss the implications of our findings for evolutionary processes and learning applied to networks which solve specific computational tasks. [1] Derrida, B. and Pomeau, Y. (1986), Europhys. Lett., 1, 45-49

  18. Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.

    PubMed

    Verde, Pablo E

    2010-12-30

    In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity and can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. The meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows substantial model checking, model diagnostics and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.

  19. Set statistics in conductive bridge random access memory device with Cu/HfO2/Pt structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Meiyun; Long, Shibing, E-mail: longshibing@ime.ac.cn; Wang, Guoming

    2014-11-10

    The switching parameter variation of resistive switching memory is one of the most important challenges in its application. In this letter, we have studied the set statistics of conductive bridge random access memory with a Cu/HfO2/Pt structure. The experimental distributions of the set parameters in several off-resistance ranges are shown to fit a Weibull model nicely. The Weibull slopes of the set voltage and current increase and decrease logarithmically with off resistance, respectively. This experimental behavior is perfectly captured by a Monte Carlo simulator based on the cell-based set voltage statistics model and the quantum point contact electron transport model. Our work provides indications for the improvement of switching uniformity.
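
    For orientation, a Weibull slope is conventionally read off a Weibull plot; a sketch on synthetic set voltages (the shape and scale below are assumptions, not the device's values):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def weibull_slope(samples):
        """Estimate the Weibull slope (shape beta) from a sample by a
        least-squares fit of the Weibull plot: W = ln(-ln(1-F)) vs ln(V),
        with F estimated by median ranks (Bernard's approximation)."""
        v = np.sort(samples)
        n = len(v)
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
        W = np.log(-np.log(1.0 - F))
        beta, _ = np.polyfit(np.log(v), W, 1)
        return beta

    # Synthetic set voltages for one off-resistance range (parameters assumed).
    v_set = rng.weibull(4.0, size=200) * 0.6     # shape 4, scale 0.6 V
    print(f"fitted Weibull slope: {weibull_slope(v_set):.2f}")
    ```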

  20. A hybrid CS-SA intelligent approach to solve uncertain dynamic facility layout problems considering dependency of demands

    NASA Astrophysics Data System (ADS)

    Moslemipour, Ghorbanali

    2018-07-01

    This paper proposes a quadratic assignment-based mathematical model to deal with the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent, normally distributed random variables with known probability density functions and covariances that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design-of-experiment and benchmark methods. The results show that the hybrid algorithm performs outstandingly in terms of both solution quality and computational time. Moreover, the proposed model can be used in both stochastic and deterministic situations.

  1. Simulation using computer-piloted point excitations of vibrations induced on a structure by an acoustic environment

    NASA Astrophysics Data System (ADS)

    Monteil, P.

    1981-11-01

    The computation of the overall levels and spectral densities of the responses measured on a launcher skin (the fairing, for instance) immersed in a random acoustic environment during take-off was studied. The analysis of the transmission of these vibrations to the payload required the simulation of these responses by a shaker control system using a small number of distributed shakers. Results show that this closed-loop computerized digital system allows the acquisition of auto- and cross-spectral densities equal to those of the responses previously computed. However, wider application is sought, e.g., road and runway profiles. The problems of multiple input-output system identification, multiple true random signal generation, and real-time programming are discussed. The system should allow for the control of four shakers.

  2. Erratum: ``CO Line Width Differences in Early Universe Molecular Emission-Line Galaxies: Submillimeter Galaxies versus QSO Hosts'' (AJ, 131, 2763 [2006])

    NASA Astrophysics Data System (ADS)

    Carilli, C. L.; Wang, Ran

    2006-11-01

    It has been pointed out to us that in three dimensions the mean angle of randomly oriented disks with respect to the sky plane is <θ> = 30°, and not the 45° assumed in the original paper. This lower angle for the (assumed) random distribution of submillimeter galaxies, coupled with the factor of 2.3 lower mean CO line width for high-z, far-IR-luminous QSO host galaxies relative to the submillimeter galaxies, implies a mean angle with respect to the sky plane for the QSO host galaxies of <θ>_QSO = 13°, as opposed to the 18° quoted in the original paper. We thank Pat Hall for bringing this to our attention.

  3. Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping

    2018-07-01

    Failure caused by seepage is the most common failure mode in dike engineering. Seepage in a dike, a longitudinally extended structure, is characterized by randomness, strong concealment and a small initial magnitude. Using a distributed fiber temperature sensing system (DTS) with an improved optical fiber layout scheme, the location of the initial interpolation point of the saturation line is obtained. With the barycentric Lagrange interpolation collocation method (BLICM), the phreatic surface over the full dike section is generated. Combined with the linear optical fiber seepage monitoring method, BLICM is applied in an engineering case, demonstrating a real-time, full-section seepage monitoring technique for dikes based on the combined method.

  4. Epidemiological characteristics of cases of death from tuberculosis and vulnerable territories

    PubMed Central

    Yamamura, Mellina; Santos-Neto, Marcelino; dos Santos, Rebeca Augusto Neman; Garcia, Maria Concebida da Cunha; Nogueira, Jordana de Almeida; Arcêncio, Ricardo Alexandre

    2015-01-01

    Objective: to characterize the differences in the clinical and epidemiological profiles of cases of death that had tuberculosis as an immediate or associated cause, and to analyze the spatial distribution of the cases of death from tuberculosis within the territories of Ribeirão Preto, Brazil. Method: an ecological study in which the population consisted of 114 cases of death from tuberculosis. Bivariate analysis was carried out, as well as point-density analysis using kernel density estimation. Results: tuberculosis was the immediate cause of death in 50 cases and an associated cause in 64. Age (p=.008) and the sector responsible for the death certificate (p=.003) were the variables that presented statistically significant associations with the cause of death. The spatial distribution of both events was non-random, forming clusters in areas of the municipality. Conclusion: the difference in the profiles of the cases of death from tuberculosis, as an immediate cause and as an associated cause, was governed by age and the sector responsible for completing the death certificate. The non-randomness of the spatial distribution of the cases suggests areas that are vulnerable to these events. Knowing these areas can contribute to the choice of disease control strategies. PMID:26487142

  5. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular the multivariate normal distribution and the Wishart distribution, is considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
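
    A sketch of the transformation idea run in the reverse, generative direction: independent standard normals mapped through a Cholesky factor reproduce a target mean and covariance (the numbers are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Target mean and covariance of a 3-variate normal (illustrative values).
    mu = np.array([1.0, -2.0, 0.5])
    Sigma = np.array([[2.0, 0.6, 0.3],
                      [0.6, 1.0, 0.2],
                      [0.3, 0.2, 1.5]])

    # Map a vector of independent standard normals z to x = mu + L z, where
    # L L^T = Sigma; then E[x] = mu and Cov[x] = L I L^T = Sigma.
    L = np.linalg.cholesky(Sigma)
    z = rng.standard_normal((100_000, 3))
    x = mu + z @ L.T

    print("sample mean:", x.mean(axis=0).round(2))
    print("sample covariance:\n", np.round(np.cov(x, rowvar=False), 2))
    ```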

  6. On joint subtree distributions under two evolutionary models.

    PubMed

    Wu, Taoyang; Choi, Kwok Pui

    2016-04-01

    In population and evolutionary biology, hypotheses about micro-evolutionary and macro-evolutionary processes are commonly tested by comparing the shape indices of empirical evolutionary trees with those predicted by neutral models. A key ingredient in this approach is the ability to compute and quantify distributions of various tree shape indices under random models of interest. As a step to meet this challenge, in this paper we investigate the joint distribution of cherries and pitchforks (that is, subtrees with two and three leaves) under two widely used null models: the Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model. Based on two novel recursive formulae, we propose a dynamic approach to numerically compute the exact joint distribution (and hence the marginal distributions) for trees of any size. We also obtained insights into the statistical properties of trees generated under these two models, including a constant correlation between the cherry and the pitchfork distributions under the YHK model, and the log-concavity and unimodality of the cherry distributions under both models. In addition, we show that there exists a unique change point for the cherry distributions between these two models. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Quantifying polypeptide conformational space: sensitivity to conformation and ensemble definition.

    PubMed

    Sullivan, David C; Lim, Carmay

    2006-08-24

    Quantifying the density of conformations over phase space (the conformational distribution) is needed to model important macromolecular processes such as protein folding. In this work, we quantify the conformational distribution for a simple polypeptide (N-mer polyalanine) using the cumulative distribution function (CDF), which gives the probability that two randomly selected conformations are separated by less than a "conformational" distance and whose inverse gives conformation counts as a function of conformational radius. An important finding is that the conformation counts obtained by the CDF inverse depend critically on the assignment of a conformation's distance span and the ensemble (e.g., unfolded state model): varying ensemble and conformation definition (1 → 2 Å) varies the CDF-based conformation counts for Ala50 from 10^11 to 10^69. In particular, relatively short molecular dynamics (MD) relaxation of Ala50's random-walk ensemble reduces the number of conformers from 10^55 to 10^14 (using a 1 Å root-mean-square-deviation radius conformation definition), pointing to potential disconnections in comparing the results from simplified models of unfolded proteins with those from all-atom MD simulations. Explicit waters are found to roughen the landscape considerably. Under some common conformation definitions, the results herein provide (i) an upper limit to the number of accessible conformations that compose unfolded states of proteins, (ii) the optimal clustering radius/conformation radius for counting conformations for a given energy and solvent model, (iii) a means of comparing various studies, and (iv) an assessment of the applicability of random search in protein folding.

  8. Simulation of Crack Propagation in Engine Rotating Components under Variable Amplitude Loading

    NASA Technical Reports Server (NTRS)

    Bonacuse, P. J.; Ghosn, L. J.; Telesman, J.; Calomino, A. M.; Kantzos, P.

    1998-01-01

    The crack propagation life of tested specimens has been repeatedly shown to strongly depend on the loading history. Overloads and extended stress holds at temperature can either retard or accelerate the crack growth rate. Therefore, to accurately predict the crack propagation life of an actual component, it is essential to approximate the true loading history. In military rotorcraft engine applications, the loading profile (stress amplitudes, temperature, and number of excursions) can vary significantly depending on the type of mission flown. To accurately assess the durability of a fleet of engines, the crack propagation life distribution of a specific component should account for the variability in the missions performed (proportion of missions flown and sequence). In this report, analytical and experimental studies are described that calibrate/validate the crack propagation prediction capability for a disk alloy under variable amplitude loading. A crack closure based model was adopted to analytically predict the load interaction effects. Furthermore, a methodology has been developed to realistically simulate the actual mission mix loading on a fleet of engines over their lifetime. A sequence of missions is randomly selected and the number of repeats of each mission in the sequence is determined assuming a Poisson distributed random variable with a given mean occurrence rate. Multiple realizations of random mission histories are generated in this manner and are used to produce stress, temperature, and time points for fracture mechanics calculations. The result is a cumulative distribution of crack propagation lives for a given, life-limiting, component location. This information can be used to determine a safe retirement life or inspection interval for the given location.
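
    A minimal sketch of the mission-history generation step described above (mission names, proportions, and the mean occurrence rate are placeholders):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Hypothetical mission types with relative proportions flown.
    missions = ["training", "transport", "combat"]
    proportions = [0.5, 0.3, 0.2]
    mean_repeats = 8.0        # mean occurrence rate of the Poisson repeat count

    def mission_history(n_blocks):
        """Generate one realization of a mission-mix loading history: missions
        are selected at random in proportion to usage, and each selected
        mission is repeated a Poisson-distributed number of times."""
        history = []
        for _ in range(n_blocks):
            m = rng.choice(missions, p=proportions)
            history += [m] * rng.poisson(mean_repeats)
        return history

    h = mission_history(20)
    print(len(h), "missions, e.g.:", h[:10])
    ```

    Repeating this over many realizations yields the sample of loading histories from which the cumulative distribution of crack propagation lives is built.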

  9. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of rockfall propagation is a key part of hazard assessment, because it permits extrapolating propagation probabilities either from partial data or purely theoretically. Propagation can be assumed frictional, which permits describing the average propagation by a line of kinetic energy corresponding to the loss of energy along the path. But the loss of energy can also be treated as a multiplicative or purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that the assumptions are relevant. The results are based either on theoretical considerations or on fits to data. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.

  10. Dimensional study of the dynamical arrest in a random Lorentz gas.

    PubMed

    Jin, Yuliang; Charbonneau, Patrick

    2015-04-01

    The random Lorentz gas (RLG) is a minimal model for transport in heterogeneous media. Upon increasing the obstacle density, it exhibits a growing subdiffusive transport regime and then a dynamical arrest. Here, we study the dimensional dependence of the dynamical arrest, which can be mapped onto the void percolation transition for Poisson-distributed point obstacles. We numerically determine the arrest in dimensions d=2-6. Comparison of the results with standard mode-coupling theory reveals that the dynamical theory prediction grows increasingly worse with d. In an effort to clarify the origin of this discrepancy, we relate the dynamical arrest in the RLG to the dynamic glass transition of the infinite-range Mari-Kurchan-model glass former. Through a mixed static and dynamical analysis, we then extract an improved dimensional scaling form as well as a geometrical upper bound for the arrest. The results suggest that understanding the asymptotic behavior of the random Lorentz gas may be key to surmounting fundamental difficulties with the mode-coupling theory of glasses.

  11. Exact extreme-value statistics at mixed-order transitions.

    PubMed

    Bar, Amir; Majumdar, Satya N; Schehr, Grégory; Mukamel, David

    2016-05-01

    We study extreme-value statistics for spatially extended models exhibiting mixed-order phase transitions (MOT). These are phase transitions that exhibit features common to both first-order (discontinuity of the order parameter) and second-order (diverging correlation length) transitions. We consider here the truncated inverse-distance-squared Ising model, which is a prototypical model exhibiting MOT, and study analytically the extreme-value statistics of the domain lengths. The lengths of the domains are identically distributed random variables except for the global constraint that their sum equals the total system size L. In addition, the number of such domains is also a fluctuating variable, and not fixed. In the paramagnetic phase, we show that the distribution of the largest domain length l_{max} converges, in the large-L limit, to a Gumbel distribution. However, at the critical point (for a certain range of parameters) and in the ferromagnetic phase, we show that the fluctuations of l_{max} are governed by novel distributions, which we compute exactly. Our main analytical results are verified by numerical simulations.

  12. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
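
    For a single constraint, a log-normal chance constraint has a closed-form deterministic equivalent via the normal quantile; a sketch with assumed log-scale parameters (not the Erhai Lake values):

    ```python
    from math import exp
    from statistics import NormalDist

    # Hypothetical log-normal runoff load W: ln W ~ N(mu, sigma^2).
    mu, sigma = 2.0, 0.5      # log-scale parameters (assumed)

    def capacity_for_reliability(alpha):
        """Deterministic equivalent of the chance constraint
        P(x >= W) >= 1 - alpha for log-normal W: x = exp(mu + sigma * z),
        where z is the (1 - alpha) standard-normal quantile."""
        z = NormalDist().inv_cdf(1.0 - alpha)
        return exp(mu + sigma * z)

    for alpha in (0.10, 0.05, 0.01):    # constraint-satisfaction levels
        print(f"alpha = {alpha:.2f}: required capacity = "
              f"{capacity_for_reliability(alpha):.1f}")
    ```

    Working with the log-normal quantile directly, rather than a normal approximation, is what avoids the deviations from irrational parameter assumptions mentioned above.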

  13. The angular distribution of diffusely backscattered light

    NASA Astrophysics Data System (ADS)

    Vera, M. U.; Durian, D. J.

    1997-03-01

    The diffusion approximation predicts the angular distribution of light diffusely transmitted through an opaque slab to depend only on boundary reflectivity, independent of scattering anisotropy, and this has been verified by experiment (M.U. Vera and D.J. Durian, Phys. Rev. E 53, 3215 (1996)). Here, by contrast, we demonstrate that the angular distribution of diffusely backscattered light depends on scattering anisotropy as well as boundary reflectivity. To model this observation, scattering anisotropy is added to the diffusion approximation by a discontinuity in the photon concentration at the source point that is proportional to the average cosine of the scattering angle. We compare the resulting predictions with random walk simulations and with measurements of diffusely backscattered intensity versus angle for glass frits and aqueous suspensions of polystyrene spheres held in air or immersed in a water bath. Increasing anisotropy and boundary reflectivity each tend to flatten the predicted distributions, and for different combinations of anisotropy and reflectivity the agreement between data and predictions ranges from qualitatively to quantitatively good.

  14. High-precision simulation of the height distribution for the KPZ equation

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Le Doussal, Pierre; Majumdar, Satya N.; Rosso, Alberto; Schehr, Gregory

    2018-03-01

    The one-point distribution of the height for the continuum Kardar-Parisi-Zhang (KPZ) equation is determined numerically using the mapping to the directed polymer in a random potential at high temperature. Using an importance sampling approach, the distribution is obtained over a large range of values, down to a probability density as small as 10^-1000 in the tails. Both short and long times are investigated and compared with recent analytical predictions for the large-deviation forms of the probability of rare fluctuations. At short times the agreement with the analytical expression is spectacular. We observe that the far left and right tails, with exponents 5/2 and 3/2, respectively, are preserved also in the region of long times. We present some evidence for the predicted non-trivial crossover in the left tail from the 5/2 tail exponent to the cubic tail of the Tracy-Widom distribution, although the details of the full scaling form remain beyond reach.

  15. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power-law, or power-log-law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one-point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  16. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the convolution of the within-study sampling distributions with the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
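
    A minimal sketch of the Box-Cox transformation applied to skewed effect estimates (the data are synthetic; the paper's Bayesian estimation of λ is not reproduced):

    ```python
    import numpy as np

    def box_cox(y, lam):
        """Box-Cox transformation of (positive) treatment-effect estimates:
        (y**lam - 1)/lam for lam != 0, and log(y) for lam == 0."""
        y = np.asarray(y, dtype=float)
        return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

    # Skewed toy "observed effects" (must be positive; shift first if not).
    rng = np.random.default_rng(10)
    effects = rng.lognormal(mean=0.2, sigma=0.8, size=12)

    for lam in (0.0, 0.5, 1.0):
        t = box_cox(effects, lam)
        skew = float(((t - t.mean())**3).mean() / t.std()**3)
        print(f"lambda = {lam}: sample skewness = {skew:.2f}")
    ```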

  17. Study on the Distribution of Geological Hazards Based on Fractal Characteristics - a Case Study of Dachuan District

    NASA Astrophysics Data System (ADS)

    Wang, X.; Liu, H.; Yao, K.; Wei, Y.

    2018-04-01

    Analyzing the causes of geological hazards is a complicated process. Using the analysis functions of GIS software, 250 landslides were randomly selected from the 395 landslide hazards in the study area and superimposed with landform type, annual rainfall and vegetation coverage. The box-dimension method of fractal theory was used to study the fractal characteristics of the spatial distribution of landslide disasters in Dachuan district and to analyze the statistical results. The findings show that the fractal dimension of the landslides in the Dachuan area is 0.9114 with a correlation coefficient of 0.9627, indicating high autocorrelation. Zoning the statistics according to the natural factors shows that the correlation between landslide hazard points and deep-hill and middle-hill areas is strong, as it is for areas whose average annual rainfall is 1050-1250 mm and whose vegetation coverage is 30%-60%. Superimposing the potential hazard distribution maps of the single influence factors yields the potential hazard zoning of landslides in the area. The zoning map was verified with the 145 remaining disaster points: 74 fall in the high-risk area (51.03% of the total), 59 in the middle-risk area (40.69%), and 12 in the low-risk area (8.28%). The verification results match the potential hazard zoning well. Therefore, fractal dimension values can describe the influence of each factor on geological disaster points more intuitively, divide the area into potential disaster risk zones, and provide visual data support for effective management of geological disasters.
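
    A minimal sketch of the box-counting estimate of fractal dimension used in such analyses (the coordinates here are uniform random placeholders, so the estimate comes out near 2 rather than the clustered inventory's 0.91):

    ```python
    import numpy as np

    def box_dimension(points, sizes):
        """Box-counting estimate of the fractal dimension of a 2-D point set:
        count occupied boxes N(s) for box sizes s and fit log N = -D log s + c."""
        points = np.asarray(points, dtype=float)
        points = (points - points.min(axis=0)) / np.ptp(points, axis=0).max()
        counts = []
        for s in sizes:
            boxes = np.unique(np.floor(points / s), axis=0)
            counts.append(len(boxes))
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    # 250 hypothetical landslide coordinates (uniform random for illustration).
    rng = np.random.default_rng(11)
    pts = rng.random((250, 2))
    print(f"box-counting dimension: {box_dimension(pts, sizes=[1/2, 1/4, 1/8]):.2f}")
    ```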

  18. Entanglement entropy at infinite-randomness fixed points in higher dimensions.

    PubMed

    Lin, Yu-Cheng; Iglói, Ferenc; Rieger, Heiko

    2007-10-05

    The entanglement entropy of the two-dimensional random transverse Ising model is studied with a numerical implementation of the strong-disorder renormalization group. The asymptotic behavior of the entropy per surface area diverges at, and only at, the quantum phase transition that is governed by an infinite-randomness fixed point. Here we identify a double-logarithmic multiplicative correction to the area law for the entanglement entropy. This contrasts with the pure area law valid at the infinite-randomness fixed point in the diluted transverse Ising model in higher dimensions.

  19. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  20. Mathematics of Failures in Complex Systems: Characterization and Mitigation of Service Failures in Complex Dynamic Systems

    DTIC Science & Technology

    2007-06-30

    fractal dimensions and Lyapunov exponents. Fractal dimensions characterize geometrical complexity of dynamics (e.g., spatial distribution of points along... ...ant classifiers (e.g., Lyapunov exponents and fractal dimensions). The first three steps show how chaotic systems may be separated from stochastic... ...correlated random walk in which α = 2H, where H is the Hurst exponent in the interval 0 ≤ H ≤ 1, with the case H = 0.5 corresponding to a simple random walk. This model has been...

  1. Probability in High Dimension

    DTIC Science & Technology

    2014-06-30

    ...b_1, . . . , b'_m, b_m) ≤ f_m(b') + Σ_{i=1}^m 1_{b_i ≠ b'_i} 1_{b_i ≠ b_j for j < i}. 4.8 (Travelling salesman problem). Let X_1, . . . , X_n be i.i.d. points that... are uniformly distributed in the unit square [0, 1]^2. We think of X_i as the location of city i. The goal of the travelling salesman problem is to find... salesman problem, . . . • Probability in Banach spaces: probabilistic limit theorems for Banach-valued random variables, empirical processes, local...
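
    As a concrete illustration of the travelling salesman setup in this excerpt, the Python sketch below draws n i.i.d. uniform cities in [0, 1]^2 and computes a greedy nearest-neighbour tour, a cheap upper bound on the optimal tour length (which is known to grow like the square root of n). The heuristic is an illustration, not part of the cited notes.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      cities = rng.random((n, 2))       # X_1,...,X_n i.i.d. uniform on the unit square

      # Greedy nearest-neighbour tour: a cheap upper bound on the optimal tour.
      unvisited = set(range(1, n))
      current, length = 0, 0.0
      while unvisited:
          nxt = min(unvisited, key=lambda j: np.linalg.norm(cities[j] - cities[current]))
          length += np.linalg.norm(cities[nxt] - cities[current])
          unvisited.remove(nxt)
          current = nxt
      length += np.linalg.norm(cities[current] - cities[0])   # return to the start
      print("greedy tour length:", round(length, 3))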

  2. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
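
    The transformation described can be sketched with inverse-CDF sampling: pushing uniform variates through the inverse distribution function yields the target law. The Python fragment below (illustrative parameters, not those of the report) produces exponential and Weibull load histories in closed form and a Poisson history by walking the CDF table.

      import numpy as np

      rng = np.random.default_rng(2)
      u = rng.random(100_000)           # uniform (0,1) raw random numbers

      # Continuous laws by inverse CDF: if U ~ Uniform(0,1), F^{-1}(U) ~ F.
      exponential = -np.log(1.0 - u)                    # exponential, rate 1
      weibull = (-np.log(1.0 - u)) ** (1.0 / 1.5)       # Weibull, shape 1.5, scale 1

      def poisson_by_inversion(u_val, lam):
          """Walk the Poisson CDF until it exceeds the uniform draw."""
          k, p = 0, np.exp(-lam)
          cdf = p
          while cdf < u_val:
              k += 1
              p *= lam / k
              cdf += p
          return k

      poisson = [poisson_by_inversion(v, 4.0) for v in u[:10_000]]
      print(exponential.mean(), weibull.mean(), np.mean(poisson))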

  3. Effect of Electroacupuncture at The Zusanli Point (Stomach-36) on Dorsal Random Pattern Skin Flap Survival in a Rat Model.

    PubMed

    Wang, Li-Ren; Cai, Le-Yi; Lin, Ding-Sheng; Cao, Bin; Li, Zhi-Jie

    2017-10-01

    Random skin flaps are commonly used for wound repair and reconstruction. Electroacupuncture at The Zusanli point could enhance microcirculation and blood perfusion in random skin flaps. To determine whether electroacupuncture at The Zusanli point can improve the survival of random skin flaps in a rat model. Thirty-six male Sprague Dawley rats were randomly divided into 3 groups: control group (no electroacupuncture), Group A (electroacupuncture at a nonacupoint near The Zusanli point), and Group B (electroacupuncture at The Zusanli point). McFarlane flaps were established. On postoperative Day 2, malondialdehyde (MDA) and superoxide dismutase were detected. The flap survival rate was evaluated, inflammation was examined in hematoxylin and eosin-stained slices, and the expression of vascular endothelial growth factor (VEGF) was measured immunohistochemically on Day 7. The mean survival area of the flaps in Group B was significantly larger than that in the control group and Group A. Superoxide dismutase activity and VEGF expression level were significantly higher in Group B than those in the control group and Group A, whereas MDA and inflammation levels in Group B were significantly lower than those in the other 2 groups. Electroacupuncture at The Zusanli point can effectively improve the random flap survival.

  4. Nest placement of the giant Amazon river turtle, Podocnemis expansa, in the Araguaia River, Goiás State, Brazil.

    PubMed

    Ferreira, Paulo Dias Júnior; Castro, Paulo de Tarso Amorim

    2005-05-01

    The giant Amazon river turtle (Podocnemis expansa) nests on extensive sand bars on the margins and interior of the channel during the dry season. The high concentration of nests in specific points of certain beaches indicates that the selection of nest placement is not random but is related to some geological aspects, such as bar margin inclination and presence of a high, sandy platform. The presence of access channels to high platform points or ramp morphology are decisive factors in the choice of nesting areas. The eroded and escarped margins of the beaches hinder the Amazon river turtle arriving at the most suitable places for nesting. Through the years, changes in beach morphology can alter nest distribution.

  5. Classical and quantum stability in putative landscapes

    DOE PAGES

    Dine, Michael

    2017-01-18

    Landscape analyses often assume the existence of large numbers of fields, N, with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N, eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N; scaling of couplings with N may also be necessary for perturbativity.more » We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. Finally, we consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.« less

  6. Classical and quantum stability in putative landscapes

    NASA Astrophysics Data System (ADS)

    Dine, Michael

    2017-01-01

    Landscape analyses often assume the existence of large numbers of fields, N , with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N , eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N ; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. We consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.

  7. SETI and SEH (Statistical Equation for Habitables)

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-01-01

    The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million±200 million, and the average distance between any two nearby habitable planets should be about 88 light years±40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. That is, the average distance between ET civilizations is about 20 times the average distance between adjacent habitable planets.
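
    The CLT argument behind the lognormal claim is easy to check numerically: the log of a product of independent positive factors is a sum of independent terms, hence approximately Gaussian. The Python sketch below mirrors the paper's worked example only loosely: ten uniform factors, each with a 10% relative standard deviation, around arbitrary placeholder means.

      import numpy as np

      rng = np.random.default_rng(3)
      n_trials, n_factors = 100_000, 10

      # Each factor: uniform around an (arbitrary) mean with sd = 10% of the mean.
      means = rng.uniform(0.1, 1.0, size=n_factors)
      half = 0.1 * np.sqrt(3.0) * means          # uniform half-width giving 10% sd
      factors = rng.uniform(means - half, means + half, size=(n_trials, n_factors))
      product = factors.prod(axis=1)

      # CLT on the logs: log(product) should be close to Gaussian.
      logp = np.log(product)
      skew = ((logp - logp.mean()) ** 3).mean() / logp.std() ** 3
      print("skewness of log(product):", skew)   # near 0 for a lognormal product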

  8. Stochastic modelling for biodosimetry: Predicting the chromosomal response to radiation at different time points after exposure

    NASA Astrophysics Data System (ADS)

    Deperas-Standylo, Joanna; Gudowska-Nowak, Ewa; Ritter, Sylvia

    2014-07-01

    Cytogenetic data accumulated from experiments with peripheral blood lymphocytes exposed to densely ionizing radiation clearly demonstrate that for particles with linear energy transfer (LET) >100 keV/μm the derived relative biological effectiveness (RBE) will strongly depend on the time point chosen for the analysis. A reasonable prediction of radiation-induced chromosome damage and its distribution among cells can be achieved by exploiting Monte Carlo methodology along with information about the radius of the penetrating ion track and the LET of the ion beam. In order to examine the relationship between the track structure and the distribution of aberrations induced in human lymphocytes, and to clarify the correlation between delays in cell cycle progression and the aberration burden visible at the first post-irradiation mitosis, we have analyzed chromosome aberrations in lymphocytes exposed to Fe ions with an LET value of 335 keV/μm and formulated a Monte Carlo model which reflects the time delay in mitosis of aberrant cells. Within the model, the frequency distributions of aberrations among cells follow the pattern of the local energy distribution and are well approximated by time-dependent compound Poisson statistics. The cell-division cycle of undamaged and aberrant cells and the chromosome aberrations are modelled as a renewal process represented by a random sum of independent and identically distributed random elements, S_N = ∑_{i=0}^{N} X_i. Here N stands for the number of particle traversals of the cell nucleus, each leading to a statistically independent formation of X_i aberrations. The parameter N is itself a random variable and reflects the cell cycle delay of heavily damaged cells. The probability distribution of S_N follows a general law for which the moment generating function satisfies the relation Φ_{S_N} = Φ_N(Φ_{X_i}). Formulation of the Monte Carlo model, which allows one to predict the expected fluxes of aberrant and non-aberrant cells, is based on several pieces of input information: (i) the experimentally measured mitotic index in the population of irradiated cells; (ii) the scored fraction of cells in the first cell cycle; (iii) the estimated average number of particle traversals per cell nucleus. By reconstructing the local dose distribution in the biological target, the relevant amount of lesions induced by ions is estimated from the biological effect induced by photons at the same dose level. Moreover, the total amount of aberrations induced within the entire population has been determined. For each subgroup of intact (non-hit) and aberrant cells the cell-division cycle has been analyzed, correctly reproducing the expected correlation between mitotic delay and the number of aberrations carried by a cell. This observation is of particular importance for the proper estimation of the biological efficiency of ions and for the estimation of health risks associated with radiation exposure.
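
    The random-sum construction S_N = ∑ X_i can be illustrated directly. In the Python sketch below, both N (traversals per nucleus) and X_i (aberrations per traversal) are taken to be Poisson with illustrative parameters, not the paper's fitted values, and the compound-Poisson moment identities are verified empirically.

      import numpy as np

      rng = np.random.default_rng(4)
      n_cells, mean_hits, mean_ab = 20_000, 2.5, 0.8   # illustrative parameters

      # N: particle traversals per nucleus; X_i: aberrations per traversal.
      N = rng.poisson(mean_hits, size=n_cells)
      S = np.array([rng.poisson(mean_ab, size=n).sum() for n in N])

      # Compound Poisson moments: E[S] = E[N]E[X], Var[S] = E[N]E[X^2].
      print("mean:", S.mean(), "expected:", mean_hits * mean_ab)
      print("var :", S.var(), "expected:", mean_hits * (mean_ab + mean_ab**2))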

  9. Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions

    DTIC Science & Technology

    2009-03-01

    Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions, by Georgios Tsivgoulis, March 2009 (Engineer's Thesis, 111 pages). Subject terms: Wireless Sensor Network, Direction of Arrival, DOA, Random... Surviving abstract fragment: "...the non-line-of-sight information."

  10. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
      §2. The large deviation principle and logarithmic asymptotics of continual integrals
      §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
        3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
        3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
        3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
        3.4. Exact asymptotics of large deviations of Gaussian norms
      §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
        4.1. The case of a non-degenerate minimum point ([137], I)
        4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
      §5. Further examples
        5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
        5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
        5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
        5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
      §6. Pickands' method of double sums
        6.1. General situations
        6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
        6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
      §7. Probabilities of large deviations of trajectories of Gaussian fields
        7.1. Homogeneous fields and fields with constant dispersion
        7.2. Finitely many maximum points of dispersion
        7.3. Manifold of maximum points of dispersion
        7.4. Asymptotics of distributions of maxima of Wiener fields
      §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
        8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
        8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
        8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
        8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
        8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
    Bibliography

  11. [Spatial point pattern analysis of main trees and flowering Fargesia qinlingensis in Abies fargesii forests in Mt Taibai of the Qinling Mountains, China].

    PubMed

    Li, Guo Chun; Song, Hua Dong; Li, Qi; Bu, Shu Hai

    2017-11-01

    In Abies fargesii forests of the giant panda's habitats in Mt. Taibai, the spatial distribution patterns and interspecific associations of the main tree species, and their spatial associations with the understory flowering Fargesia qinlingensis, were analyzed at multiple scales by univariate and bivariate O-ring functions in point pattern analysis. The results showed that in the A. fargesii forest, A. fargesii was the most numerous species but its population structure was in decline. The population of Betula platyphylla was relatively young, with a stable population structure, while the population of B. albo-sinensis was in decline. The three populations showed aggregated distributions at small scales and gradually shifted to random distributions with increasing spatial scale. Spatial associations among tree species occurred mainly at small scales and gradually disappeared with increasing scale. A. fargesii and B. platyphylla were positively associated with flowering F. qinlingensis at large and medium scales, whereas B. albo-sinensis was negatively associated with flowering F. qinlingensis at large and medium scales. The interaction between the trees and F. qinlingensis in the giant panda's habitats promoted the dynamic succession and development of the forests, thereby changing the environment of the giant panda's habitats in the Qinling Mountains.

  12. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
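
    One simple variant of the finite population Bayesian bootstrap idea, not the authors' exact procedure, is sketched below: Dirichlet probabilities tilted by the design weights stand in for a posterior over the population composition, from which a synthetic population is resampled. Data and weights are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)
      y = rng.normal(size=200)                   # illustrative survey responses
      w = rng.uniform(1.0, 10.0, size=200)       # illustrative design weights

      def synthetic_population(y, w, pop_size, rng):
          # Dirichlet probabilities tilted by the weights approximate a
          # posterior over the finite-population composition; then resample.
          probs = rng.dirichlet(w)
          return rng.choice(y, size=pop_size, replace=True, p=probs)

      pop = synthetic_population(y, w, 10_000, rng)
      print("weighted sample mean:", np.average(y, weights=w))
      print("synthetic population mean:", pop.mean())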

  13. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs. PMID:29200608

  14. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables: first the random sampling probability Y, and then the k-1 node depths, which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  15. Random Walks in a One-Dimensional Lévy Random Environment

    NASA Astrophysics Data System (ADS)

    Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena

    2016-04-01

    We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.

  16. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
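
    A quick numerical check of the dilution statement is easy to set up. The Python sketch below uses independent (rather than weakly dependent) Gaussian entries, keeps each entry with a small probability, and compares the spectrum's second moment and edge with the semicircle values (1 and 2 after the usual normalisation). Parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)
      N, keep = 1000, 0.05                       # keep ~5% of the entries

      upper = rng.normal(size=(N, N)) * (rng.random((N, N)) < keep)
      A = np.triu(upper, 1)
      A = A + A.T                                # symmetric diluted random matrix

      eigs = np.linalg.eigvalsh(A) / np.sqrt(N * keep)   # variance normalisation
      print("second moment:", (eigs ** 2).mean(), "(semicircle: 1.0)")
      print("spectral edge :", np.abs(eigs).max(), "(semicircle: 2.0)")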

  17. Evaluating the Use of Random Distribution Theory to Introduce Statistical Inference Concepts to Business Students

    ERIC Educational Resources Information Center

    Larwin, Karen H.; Larwin, David A.

    2011-01-01

    Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…

  18. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

    USGS Publications Warehouse

    Phillips, D.L.; Marks, D.G.

    1996-01-01

    In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
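
    For a single grid cell, the propagation scheme can be sketched as follows: draw the three inputs from normals centred on their kriged values with the kriging SDs (optionally correlated), push each draw through the model, and summarize the output spread. In the Python sketch below, the PET function, the kriged means, and the error correlations are placeholders, not the paper's; only the SDs are the spring-season averages quoted above.

      import numpy as np

      rng = np.random.default_rng(7)

      def pet_model(temp_c, rh_pct, wind_ms):
          # Placeholder response, not the paper's PET formulation.
          return np.maximum(temp_c, 0.0) * 0.3 * (1.0 - rh_pct / 100.0) * (1.0 + 0.2 * wind_ms)

      mu = np.array([12.0, 55.0, 3.0])           # kriged temp (°C), RH (%), wind (m/s)
      sd = np.array([2.6, 8.7, 0.38])            # kriging SDs quoted in the abstract
      corr = np.array([[1.0, -0.5, 0.1],         # illustrative error correlations
                       [-0.5, 1.0, 0.0],
                       [0.1, 0.0, 1.0]])
      cov = corr * np.outer(sd, sd)

      draws = rng.multivariate_normal(mu, cov, size=100)   # 100 runs per grid cell
      pet = pet_model(draws[:, 0], draws[:, 1], draws[:, 2])
      print("PET mean:", pet.mean(), " CV (%):", 100.0 * pet.std() / pet.mean())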

  19. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional counterparts are widely used in probability theory. Generating functions of those multidimensional distributions were also obtained.

  20. Transforming graphene nanoribbons into nanotubes by use of point defects.

    PubMed

    Sgouros, A; Sigalas, M M; Papagelis, K; Kalosakas, G

    2014-03-26

    Using molecular dynamics simulations with semi-empirical potentials, we demonstrate a method to fabricate carbon nanotubes (CNTs) from graphene nanoribbons (GNRs), by periodically inserting appropriate structural defects into the GNR crystal structure. We have found that various defect types initiate the bending of GNRs and eventually lead to the formation of CNTs. All kinds of carbon nanotubes (armchair, zigzag, chiral) can be produced with this method. The structural characteristics of the resulting CNTs, and the dependence on the different type and distribution of the defects, were examined. The smallest (largest) CNT obtained had a diameter of ∼ 5 Å (∼ 39 Å). Proper manipulation of ribbon edges controls the chirality of the CNTs formed. Finally, the effect of randomly distributed defects on the ability of GNRs to transform into CNTs is considered.

  1. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables uniformly distributed over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since independent uniform random variables are easily simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables whose joint probability distribution is known.
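
    In two dimensions the recursive construction reduces to inverting first the marginal and then the conditional CDF (a Rosenblatt-type transform). The Python sketch below works a closed-form example with density f(x1, x2) = x1 + x2 on the unit square, an illustrative target not taken from the paper, and checks a joint moment.

      import numpy as np

      rng = np.random.default_rng(8)
      u1, u2 = rng.random(100_000), rng.random(100_000)

      # Target: density f(x1, x2) = x1 + x2 on the unit square.
      # Step 1: invert the marginal CDF F1(x) = (x^2 + x) / 2.
      x1 = (-1.0 + np.sqrt(1.0 + 8.0 * u1)) / 2.0
      # Step 2: invert the conditional CDF F(x2 | x1) = (x1*x2 + x2^2/2) / (x1 + 1/2).
      x2 = -x1 + np.sqrt(x1 ** 2 + u2 * (2.0 * x1 + 1.0))

      print("E[X1*X2] ~", (x1 * x2).mean(), "(exact value 1/3)")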

  2. Immediate effects of Tuina techniques on working-related musculoskeletal disorder of professional orchestra musicians.

    PubMed

    Sousa, Cláudia Maria; Moreira, Luis; Coimbra, Daniela; Machado, Jorge; Greten, Henry J

    2015-07-01

    Musicians are a group prone to suffering from working-related musculoskeletal disorder (WRMD). Conventional solutions to control musculoskeletal pain include pharmacological treatment and rehabilitation programs, but their efficiency is sometimes disappointing. The aim of this research is to study the immediate effects of Tuina techniques on WRMD of professional orchestra musicians from the north of Portugal. We performed a prospective, controlled, single-blinded, randomized study. Professional orchestra musicians with a diagnosis of WRMD were randomly distributed into the experimental group (n=39) and the control group (n=30). During an individual interview, Chinese diagnosis took place and treatment points were chosen. Real acupoints were treated with Tuina techniques in the experimental group, and non-specific skin points were treated in the control group. Pain was measured on a verbal numerical scale before and immediately after the intervention. After one treatment session, pain was reduced in 91.8% of the cases in the experimental group and 7.9% in the control group. Although the results show that Tuina techniques effectively reduce WRMD in professional orchestra musicians of the north of Portugal, further investigations with stronger measurements, double-blinded designs and bigger sample sizes are needed.

  3. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  4. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    PubMed

    Lash, Timothy L

    2007-11-26

    The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and the important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretations. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, and so elevate the discussion of study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
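
    The draw-adjust-repeat procedure can be sketched for a single bias source. The Python fragment below starts from the glyphosate result quoted above (hazard ratio 2.6, 95% CI 0.7 to 9.4), assigns an illustrative lognormal prior to a confounding bias factor, and combines sampled random error with the sampled bias. The bias model and prior are simplified stand-ins, not the study's.

      import numpy as np

      rng = np.random.default_rng(9)
      n_iter = 50_000

      # Random error reconstructed from the conventional result (2.6, CI 0.7-9.4).
      se_log = (np.log(9.4) - np.log(0.7)) / (2.0 * 1.96)
      sampled = rng.lognormal(np.log(2.6), se_log, size=n_iter)

      # Bias parameter: an illustrative lognormal prior on a confounding factor.
      bias = rng.lognormal(np.log(1.3), 0.3, size=n_iter)

      adjusted = sampled / bias                  # divide out the sampled bias
      lo, med, hi = np.percentile(adjusted, [2.5, 50.0, 97.5])
      print(f"median {med:.2f}, 95% simulation interval ({lo:.2f}, {hi:.2f})")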

  5. Branching random walk with step size coming from a power law

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ayan; Subhra Hazra, Rajat; Roy, Parthanil

    2015-09-01

    In their seminal work, Brunet and Derrida made predictions about the random point configurations associated with branching random walks. We discuss the limiting behavior of such point configurations when the displacement random variables come from a power law. In particular, we establish that two of their predictions remain valid in this setup, and we investigate various other issues mentioned in their paper.

  6. Random vs. systematic sampling from administrative databases involving human subjects.

    PubMed

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes n (50, 100, 150, 200, 250, 300, 500, 800). From the profile summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-squared tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strength of agreement between the provincial distributions was quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yield acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
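
    The operational difference between the two techniques is only in how indices are chosen, as the Python sketch below shows on a synthetic membership frame (arbitrary order, illustrative values): SRS draws n indices without replacement, while SS takes every k-th record after a random start.

      import numpy as np

      rng = np.random.default_rng(10)
      frame = rng.normal(45.0, 10.0, size=6000)  # e.g. ages, arbitrary frame order

      def srs(frame, n, rng):
          return frame[rng.choice(len(frame), size=n, replace=False)]

      def systematic(frame, n, rng):
          step = len(frame) // n                 # sampling interval k
          start = rng.integers(step)             # random start in the first interval
          return frame[start::step][:n]

      for n in (50, 200, 800):
          print(n, round(srs(frame, n, rng).mean(), 2),
                   round(systematic(frame, n, rng).mean(), 2))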

  7. On the apparent insignificance of the randomness of flexible joints on large space truss dynamics

    NASA Technical Reports Server (NTRS)

    Koch, R. M.; Klosner, J. M.

    1993-01-01

    Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.

  8. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    NASA Astrophysics Data System (ADS)

    Žukovič, Milan; Hristopulos, Dionissios T.

    2009-02-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
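
    The binary (Ising-like) level of such a scheme can be sketched as follows: discretize a variable with respect to a threshold, pin the 'spins' at sampled nodes, and relax the unsampled nodes by Metropolis moves under a ferromagnetic nearest-neighbour interaction. This is a simplified stand-in for the paper's cost-function matching; the field, sampling fraction, and inverse temperature below are illustrative.

      import numpy as np

      rng = np.random.default_rng(11)
      L, beta = 64, 0.7
      truth = np.where(rng.normal(size=(L, L)) > 0.0, 1, -1)   # discretized variable
      observed = rng.random((L, L)) < 0.2                      # 20% sampled nodes
      state = np.where(observed, truth, rng.choice([-1, 1], size=(L, L)))

      def local_energy(s, i, j):
          # Ferromagnetic nearest-neighbour energy with periodic boundaries.
          nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
          return -s[i, j] * nb

      for _ in range(200_000):                   # Metropolis relaxation
          i, j = rng.integers(L), rng.integers(L)
          if observed[i, j]:
              continue                           # sample values are respected locally
          dE = -2.0 * local_energy(state, i, j)  # energy change if the spin flips
          if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
              state[i, j] *= -1

      print("agreement with the discretized truth:", (state == truth).mean())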

  9. Stochastic space interval as a link between quantum randomness and macroscopic randomness?

    NASA Astrophysics Data System (ADS)

    Haug, Espen Gaarder; Hoff, Harald

    2018-03-01

    For many stochastic phenomena, we observe statistical distributions that have fat-tails and high-peaks compared to the Gaussian distribution. In this paper, we will explain how observable statistical distributions in the macroscopic world could be related to the randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in Gaussian - or very close to Gaussian-distributed subatomic particle randomness, but they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three and a half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat-tails and high-peaks relative to the Gaussian distribution in stocks and commodity prices and many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).

  10. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to assessing timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. The parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores, typical examples of paleoproxy archives with age models based on tie points of mixed origin.
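
    The Beta result for the linear-accumulation case is easy to verify by simulation: normalized increments of a Gamma process between two absolutely dated tie points are Dirichlet-distributed, so the elapsed-time fraction at an interior depth is Beta-distributed. The Python sketch below uses illustrative tie-point ages and shape parameters, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(12)
      m, k, a = 100, 30, 2.0                     # depth steps, query depth, shape/step
      t0, t1 = 0.0, 10_000.0                     # tie-point ages (illustrative years)

      # Gamma-process increments between the tie points; normalisation gives ages.
      g = rng.gamma(a, size=(50_000, m))
      frac = g[:, :k].sum(axis=1) / g.sum(axis=1)    # Beta(k*a, (m-k)*a) in theory
      ages = t0 + frac * (t1 - t0)

      print("mean age:", ages.mean(), "(theory:", t0 + (t1 - t0) * k / m, ")")
      print("95% band:", np.percentile(ages, [2.5, 97.5]))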

  11. Distributed Detection with Collisions in a Random, Single-Hop Wireless Sensor Network

    DTIC Science & Technology

    2013-05-26

    Distributed detection with collisions in a random, single-hop wireless sensor network. Gene T. Whipps (U.S. Army Research Laboratory, Adelphi, MD 20783, and The Ohio State University), Emre Ertin and Randolph L. Moses (The Ohio State University). Approved for public release; distribution is unlimited. Surviving abstract fragment: "We consider the problem of..."

  12. Optimal marker placement in hadrontherapy: intelligent optimization strategies with augmented Lagrangian pattern search.

    PubMed

    Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido

    2015-02-01

    In high precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on the placement of external fiducial points. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take prior knowledge, such as spatial constraints, into account during the optimization process. The SAPS algorithm was tested on data from head-and-neck and pelvic cancer patients who were fitted with external surface markers for IR optical tracking applied for preliminary patient set-up correction. The integrated algorithm was tested using optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and the fiducial registration error, TRE/FRE) and by assessing the marker spatial distribution. Comparisons were performed with randomly selected marker configurations and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) the TRE/FRE ratio decreases; (ii) the marker distribution satisfies both marker visibility and spatial constraints. We have also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied for patient set-up error detection and correction in radiation therapy, showing that taking prior knowledge into account is valuable in this optimization process. Further work will focus on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Evolutionary advantage via common action of recombination and neutrality

    NASA Astrophysics Data System (ADS)

    Saakian, David B.; Hu, Chin-Kun

    2013-11-01

    We investigate evolution models with recombination and neutrality. We consider the Crow-Kimura (parallel) mutation-selection model with a neutral fitness landscape in which there is a central peak with high fitness A, and some of the 1-point mutants have the same high fitness A, while the fitness of the other sequences is 0. We find that the effect of recombination and neutrality depends on the concrete version of both neutrality and recombination. We consider three versions of neutrality: (a) all the nearest-neighbor sequences of the peak sequence have the same high fitness A; (b) all the l-point mutations in a piece of genome of length l≥1 are neutral; (c) the neutral sequences are randomly distributed among the nearest neighbors of the peak sequence. We also consider three versions of recombination: (I) the simple horizontal gene transfer (HGT) of one nucleotide; (II) the exchange of a piece of genome of length l, HGT-l; (III) two-point crossover recombination (2CR). For case (a), the 2CR gives a rather strong contribution to the mean fitness, much stronger than that of HGT for a large genome length L. For the random distribution of neutral sequences there is a critical degree of neutrality νc; for ν<νc with (νc-ν) not large, the 2CR suppresses the mean fitness while HGT increases it, and for ν much larger than νc, the 2CR and HGT-l increase the mean fitness more than HGT does. We also consider recombination in the case of smooth fitness landscapes. Recombination gives some advantage in the evolutionary dynamics, and it clearly distinguishes the mean-field-like evolutionary factors from the fluctuation-like ones. By contrast, mutations affect the mean-field-like and fluctuation-like factors similarly. Consequently, recombination can accelerate the non-mean-field (fluctuation) type dynamics without considerably affecting the mean-field-like factors.

  14. High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems

    NASA Astrophysics Data System (ADS)

    Bose, S. K.

    1980-12-01

    Demand assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic while retaining high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple point communication system carrying fixed-length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are made following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision-type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand assigned mode. Schemes B and C operate all their forward message channels in a demand assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.

  15. Nanoscale diffusive memristor crossbars as physical unclonable functions.

    PubMed

    Zhang, R; Jiang, H; Wang, Z R; Lin, P; Zhuo, Y; Holcomb, D; Zhang, D H; Yang, J J; Xia, Q

    2018-02-08

    Physical unclonable functions have emerged as promising hardware security primitives for device authentication and key generation in the era of the Internet of Things. Herein, we report novel physical unclonable functions built upon crossbars of nanoscale diffusive memristors that translate the stochastic distribution of Ag clusters in a SiO2 matrix into a random binary bitmap that serves as a device fingerprint. The random dispersion of Ag led to an uneven number of clusters at each cross-point, which in turn resulted in a stochastic ability to switch in the Ag:SiO2 diffusive memristors in an array. The randomness of the dispersion was a barrier to fingerprint cloning, and the unique fingerprint of each device persisted after fabrication. Using an optimized fabrication procedure, we maximized the randomness and achieved an inter-class Hamming distance of 50.68%. We also discovered that the bits did not flip after over 10^4 s at 400 K, suggesting superior reliability of our physical unclonable functions. In addition, our diffusive memristor-based physical unclonable functions were easy to fabricate and did not require complicated post-processing for digitization, and thus provide new opportunities in hardware security applications.
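
    The uniqueness figure quoted above is the fractional inter-class Hamming distance, whose computation is sketched below for idealized random fingerprints; for truly random, independent bitmaps the expected value is 50%. Device and bit counts are illustrative.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(13)
      n_devices, n_bits = 20, 1024
      fingerprints = rng.integers(0, 2, size=(n_devices, n_bits))   # ideal random PUFs

      # Fractional inter-class Hamming distance over all device pairs.
      dists = [np.mean(fingerprints[i] != fingerprints[j])
               for i, j in combinations(range(n_devices), 2)]
      print(f"mean inter-class HD: {100.0 * np.mean(dists):.2f}% (ideal: 50%)")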

  16. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
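
    The gamma-mixture construction can be verified in a few lines: drawing each count's mean from a gamma distribution and then sampling a Poisson count yields a negative binomial, with variance exceeding the mean (overdispersion). Parameters in the Python sketch below are illustrative.

      import numpy as np

      rng = np.random.default_rng(14)
      shape, scale, n = 3.0, 1.5, 200_000        # illustrative gamma parameters

      lam = rng.gamma(shape, scale, size=n)      # random mean per subject
      counts = rng.poisson(lam)                  # gamma-mixed Poisson counts

      # Negative binomial moments: mean = shape*scale, var = mean*(1 + scale).
      print("mean:", counts.mean(), "expected:", shape * scale)
      print("var :", counts.var(), "expected:", shape * scale * (1.0 + scale))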

  17. Concise biomarker for spatial-temporal change in three-dimensional ultrasound measurement of carotid vessel wall and plaque thickness based on a graph-based random walk framework: Towards sensitive evaluation of response to therapy.

    PubMed

    Chiu, Bernard; Chen, Weifu; Cheng, Jieyu

    2016-12-01

    Rapid progression in total plaque area and volume measured from ultrasound images has been shown to be associated with an elevated risk of cardiovascular events. Since atherosclerosis is focal and occurs predominantly at the bifurcation, biomarkers that quantify the spatial distribution of vessel-wall-plus-plaque thickness (VWT) change may allow for more sensitive detection of treatment effects. The goal of this paper is to develop simple and sensitive biomarkers that quantify the responsiveness to therapies based on the spatial distribution of VWT-Change on the entire 2D carotid standardized map previously described. Point-wise VWT-Changes computed for each patient were reordered lexicographically into a high-dimensional data node in a graph. A graph-based random walk framework was applied, with a novel Weighted Cosine (WCos) similarity function introduced and tailored for quantification of responsiveness to therapy. The converging probability of each data node to the VWT regression template in the random walk process served as a scalar descriptor of VWT responsiveness to treatment. The WCos-based biomarker was 14 times more sensitive than the mean VWT-Change in discriminating responsive from unresponsive subjects, based on the p-values obtained in t-tests. The proposed framework was extended to quantify where VWT-Change occurred by including multiple VWT-Change distribution templates representing focal changes at different regions. Experimental results show that the framework was effective in classifying carotid arteries with focal VWT-Change at different locations and may facilitate future investigations to correlate the risk of cardiovascular events with the location where focal VWT-Change occurs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. [Prevalence and spatial distribution of trachoma among schoolchildren in Botucatu, São Paulo - Brazil].

    PubMed

    Schellini, Silvana Artioli; Lavezzo, Marcelo Mendes; Ferraz, Lucieni Barbarini; Olbrich Neto, Jaime; Medina, Norma Hellen; Padovani, Carlos Roberto

    2010-01-01

    To assess the prevalence of trachoma in schoolchildren of Botucatu/SP, Brazil, and its spatial distribution. Cross-sectional study of children aged 7 to 14 years who attended elementary schools in Botucatu/SP in November 2005. The sample size was estimated at 2,092 children, considering the 11.2% historic prevalence of trachoma, an estimation error of 10% and a confidence level of 95%. The sample was random, weighted and increased by 20% to allow for possible losses. The total number of children examined was 2,692. The diagnosis was clinical, based on WHO guidelines. For the evaluation of spatial data, the CartaLinx program (v1.2) was used, and the school demand sectors were digitized according to the planning divisions of the Department of Education. The data were statistically analyzed, and the spatial structure of events was calculated using the Geode program. The prevalence of trachoma in schoolchildren of Botucatu was 2.9%, and there were cases of follicular trachoma. The exploratory spatial analysis failed to reject the null hypothesis of randomness (R = -0.45, p > 0.05), with no significant demand sectors. The analysis of the Thiessen polygons also showed that the overall pattern was random (I = -0.07, p = 0.49). However, local indicators pointed to a cluster of the low-low type for a polygon to the north of the urban area. The analysis of the spatial distribution did not reveal areas of greater clustering of cases. Although the overall pattern of the disease does not reproduce the socio-economic conditions of the population, the lower prevalence of trachoma was found in areas of lower social vulnerability.
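
    The global randomness statistic quoted here (I = -0.07) is Moran's I, which sits near its null expectation of -1/(n-1) when cases are spatially random. A minimal sketch of the statistic for areal values and a binary contiguity matrix (the data and the weight matrix are illustrative stand-ins, not the study's):

    ```python
    import numpy as np

    def morans_i(values, weights):
        """Global Moran's I for values with an (n, n) spatial weight matrix."""
        z = values - values.mean()
        num = (weights * np.outer(z, z)).sum()
        return len(values) / weights.sum() * num / (z ** 2).sum()

    rng = np.random.default_rng(2)
    n = 30
    prevalence = rng.normal(size=n)          # e.g. prevalence per school demand sector
    w = rng.integers(0, 2, size=(n, n))      # stand-in for a contiguity matrix
    w = np.triu(w, 1); w = w + w.T           # symmetric, zero diagonal
    print(morans_i(prevalence, w))           # near -1/(n-1) under spatial randomness
    ```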

  19. Radiative transfer theory for a random distribution of low velocity spheres as resonant isotropic scatterers

    NASA Astrophysics Data System (ADS)

    Sato, Haruo; Hayakawa, Toshihiko

    2014-10-01

    Short-period seismograms of earthquakes are complex, especially beneath volcanoes, where the S-wave mean free path is short and low-velocity bodies composed of melt or fluid are expected, in addition to random velocity inhomogeneities, as scattering sources. Resonant scattering inherent in a low-velocity body shows trapping and release of waves with a delay time. Focusing on this delay-time phenomenon, we must seriously consider multiple resonant scattering processes. Since wave phases are complex in such a scattering medium, the radiative transfer theory has often been used to synthesize the variation of the mean square (MS) amplitude of waves; however, resonant scattering has not been well incorporated into the conventional radiative transfer theory. Here, as a simple mathematical model, we study the sequence of isotropic resonant scattering of a scalar wavelet by low-velocity spheres at low frequencies, where the velocity inside is supposed to be low enough. We first derive the total scattering cross-section per time for each order of scattering as the convolution kernel representing the decaying scattering response. Then, for a random and uniform distribution of such identical resonant isotropic scatterers, we build the propagator of the MS amplitude by using causality, a geometrical spreading factor and the scattering loss. Using those propagators and convolution kernels, we formulate the radiative transfer equation for a spherically impulsive radiation from a point source. The synthesized MS amplitude time trace shows a dip just after the direct arrival, a delayed swelling, and then a decaying tail at large lapse times. The delayed swelling is a prominent effect of resonant scattering. The space distribution of the synthesized MS amplitude shows a swelling near the source region, and it becomes bell-shaped like a diffusion solution at large lapse times.

  20. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    PubMed

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past.
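
    The horizontal CPP construction is simple to simulate: node depths are drawn i.i.d. until a draw exceeds the age of the tree. A minimal sketch under that description, with an exponential depth distribution chosen purely for illustration:

    ```python
    import numpy as np

    def simulate_cpp(stem_age, draw_depth, rng):
        """Coalescent point process: draw i.i.d. node depths until one exceeds stem_age.
        Returns the coalescence depths; the number of extant tips is len(depths) + 1."""
        depths = []
        while True:
            h = draw_depth(rng)
            if h > stem_age:
                return np.array(depths)
            depths.append(h)

    rng = np.random.default_rng(3)
    depths = simulate_cpp(stem_age=10.0, draw_depth=lambda r: r.exponential(2.0), rng=rng)
    print(len(depths) + 1, "extant species; node depths:", np.round(depths, 2))
    ```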

  1. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range and the GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than in the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
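
    The coding-set idea can be illustrated on the simplest case, a first-order GMRF on a torus: the two colours of a checkerboard are each conditionally independent given the other, so a whole colour class can be updated at once from neighbour sums computed by shifts (a convolution). A minimal sketch, assuming a precision matrix with diagonal 4 + kappa and -1 on lattice neighbours; kappa and the lattice size are illustrative, and the paper's actual GMRF and truncation step are not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, kappa, n_sweeps = 64, 0.5, 200
    x = np.zeros((n, n))
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    colour = (ii + jj) % 2                        # checkerboard coding sets

    for _ in range(n_sweeps):
        for c in (0, 1):
            # Sum of the four torus neighbours of every site (a convolution via shifts).
            nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                  np.roll(x, 1, 1) + np.roll(x, -1, 1))
            # Full conditional for precision Q = kappa*I + graph Laplacian:
            # x_i | rest ~ N(nb_i / (4 + kappa), 1 / (4 + kappa)).
            mean = nb / (4.0 + kappa)
            sample = mean + rng.normal(size=(n, n)) / np.sqrt(4.0 + kappa)
            x = np.where(colour == c, sample, x)  # update one coding set at a time

    print("sample variance:", x.var())
    ```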

  2. Brittle-to-ductile transition in a fiber bundle with strong heterogeneity.

    PubMed

    Kovács, Kornél; Hidalgo, Raul Cruz; Pagonabarraga, Ignacio; Kun, Ferenc

    2013-04-01

    We analyze the failure process of a two-component system with widely different fracture strengths in the framework of a fiber bundle model with localized load sharing. A fraction 0 ≤ α ≤ 1 of the bundle is strong and is represented by unbreakable fibers, while fibers of the weak component have randomly distributed failure strengths. Computer simulations revealed that there exists a critical composition α_c which separates two qualitatively different behaviors: below the critical point, the failure of the bundle is brittle, characterized by an abrupt damage growth within the breakable part of the system. Above α_c, however, the macroscopic response becomes ductile, providing stability during the entire breaking process. The transition occurs at an astonishingly low fraction of strong fibers, which can be important for applications. We show that in the ductile phase, the size distribution of breaking bursts has a power-law functional form with an exponent μ = 2 followed by an exponential cutoff. In the brittle phase, the power law also prevails but with a higher exponent μ = 9/2. The transition between the two phases shows analogies to continuous phase transitions. Analyzing the microstructure of the damage, we found that at the beginning of the fracture process cracks nucleate randomly, while later on growth and coalescence of cracks dominate, giving rise to power-law distributed crack sizes.

  3. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories

    NASA Astrophysics Data System (ADS)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuosity of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  4. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories.

    PubMed

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuosity of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  5. Non-Random Spatial Distribution of Impacts in the Stardust Cometary Collector

    NASA Technical Reports Server (NTRS)

    Westphal, Andrew J.; Bastien, Ronald K.; Borg, Janet; Bridges, John; Brownlee, Donald E.; Burchell, Mark J.; Cheng, Andrew F.; Clark, Benton C.; Djouadi, Zahia; Floss, Christine

    2007-01-01

    In January 2004, the Stardust spacecraft flew through the coma of comet 81P/Wild 2 at a relative speed of 6.1 km/s. Cometary dust was collected in a 0.1 sq m collector consisting of aerogel tiles and aluminum foils. Two years later, the samples were successfully returned to Earth and recovered. We report the discovery that impacts in the Stardust cometary collector are not distributed randomly in the collecting media, but appear to be clustered on scales smaller than approximately 10 cm. We also report the discovery of at least two populations of oblique tracks. We evaluated several hypotheses that could explain the observations. No hypothesis was consistent with all the observations, but the preponderance of evidence points toward at least one impact on the central Whipple shield of the spacecraft as the origin of both the clustering and the low-angle oblique tracks. High-angle oblique tracks unambiguously originate from a noncometary impact on the spacecraft bus just forward of the collector. Here we summarize the observations and review the evidence for and against three scenarios that we have considered for explaining the impact clustering found on the Stardust aerogel and foil collectors.

  6. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
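
    The claim is easy to probe numerically: sum a modest number of i.i.d. heavy-tailed positive variables, and the logarithm of the sum is far closer to symmetric than the sum itself. A minimal sketch; the lognormal summands and their parameters are just one convenient choice of positive random variables, not the paper's analytical example:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    summands = rng.lognormal(mean=0.0, sigma=2.0, size=(50_000, 100))
    s = summands.sum(axis=1)       # sum of 100 positive random variables per sample

    print("skewness of s:     %.2f" % stats.skew(s))          # strongly skewed sum
    print("skewness of log s: %.2f" % stats.skew(np.log(s)))  # near 0: log-normal-like
    ```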

  7. Leonid Storm Flux Analysis From One Leonid MAC Video AL50R

    NASA Technical Reports Server (NTRS)

    Gural, Peter S.; Jenniskens, Peter; DeVincenzi, Donald L. (Technical Monitor)

    2000-01-01

    A detailed meteor flux analysis is presented of a seventeen-minute portion of one videotape, collected on November 18, 1999, during the Leonid Multi-instrument Aircraft Campaign. The data were recorded around the peak of the Leonid meteor storm using an intensified CCD camera pointed towards the low southern horizon. Positions of meteors on the sky were measured. These measured meteor distributions were compared to a Monte Carlo simulation, which is a new approach to parameter estimation for mass ratio and flux. Comparison of simulated versus observed flux levels, seen between 1:50:00 and 2:06:41 UT, indicates a magnitude population index of r = 1.8 +/- 0.1 and a mass ratio of s = 1.64 +/- 0.06. The average spatial density of the material contributing to the Leonid storm peak is measured at 0.82 +/- 0.19 particles per square kilometer per hour for particles of at least absolute visual magnitude +6.5. Clustering analysis of the arrival times of Leonids impacting the Earth's atmosphere over the total observing interval shows no enhancement or clumping down to time scales of the video frame rate. This indicates a uniformly random temporal distribution of particles in the stream encountered during the 1999 epoch. Based on the observed distribution of meteors on the sky and the model distribution, recommendations are made for the optimal pointing directions for video camera meteor counts during future ground and airborne missions.

  8. Insensitivity of the octahedral spherical hohlraum to power imbalance, pointing accuracy, and assemblage accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huo, Wen Yi; Zhao, Yiqing; Zheng, Wudi

    2014-11-15

    The random radiation asymmetry in the octahedral spherical hohlraum [K. Lan et al., Phys. Plasmas 21, 010704 (2014)] arising from the power imbalance and pointing accuracy of laser quads and the assemblage accuracy of the capsule is investigated by using a 3-dimensional view factor model. From our study, for the spherical hohlraum, the random radiation asymmetry arising from the power imbalance of the laser quads is about half of that in the cylindrical hohlraum; the random asymmetry arising from the pointing error is about one order lower than that in the cylindrical hohlraum; and the random asymmetry arising from the assemblage error of the capsule is about one third of that in the cylindrical hohlraum. Moreover, the random radiation asymmetry in the spherical hohlraum is also less than that in the elliptical hohlraum. The results indicate that the spherical hohlraum is more insensitive to the random variations than the cylindrical and elliptical hohlraums. Hence, the spherical hohlraum can relax the requirements on the power imbalance and pointing accuracy of the laser facility and the assemblage accuracy of the capsule.

  9. Calculation of Dose Deposition in 3D Voxels by Heavy Ions and Simulation of gamma-H2AX Experiments

    NASA Technical Reports Server (NTRS)

    Plante, I.; Ponomarev, A. L.; Wang, M.; Cucinotta, F. A.

    2011-01-01

    The biological response to high-LET radiation is different from that to low-LET radiation due to several factors, notably differences in energy deposition and in the formation of radiolytic species. Of particular importance in radiobiology is the formation of double-strand breaks (DSB), which can be detected by γ-H2AX foci experiments. These experiments have revealed important differences in the spatial distribution of DSB induced by low- and high-LET radiations [1,2]. To simulate γ-H2AX experiments, models based on amorphous track with radial dose are often combined with random walk chromosome models [3,4]. In this work, a new approach using the Monte-Carlo track structure code RITRACKS [5] and chromosome models has been used to simulate DSB formation. First, RITRACKS was used to simulate the irradiation of a cubic volume of 5 µm by 1) 450 1H+ ions of 300 MeV (LET 0.3 keV/µm) and 2) by 1 56Fe26+ ion of 1 GeV/amu (LET 150 keV/µm). All energy deposition events are recorded to calculate the dose in voxels of 20 µm. The dose voxels are distributed randomly and scattered uniformly within the volume irradiated by low-LET radiation. Many differences are found in the spatial distribution of dose voxels for the 56Fe26+ ion. The track structure can be distinguished, and voxels with very high dose are found in the region corresponding to the track "core". These high-dose voxels are not found in the low-LET irradiation simulation and indicate clustered energy deposition, which may be responsible for complex DSB. In the second step, assuming that DSB will be found only in voxels where energy is deposited by the radiation, the intersection points between voxels with dose > 0 and simulated chromosomes were obtained. The spatial distribution of the intersection points is similar to that seen in γ-H2AX foci experiments. These preliminary results suggest that combining stochastic track structure and chromosome models could be a good approach to understand radiation-induced DSB and chromosome aberrations.

  10. Restricted mean survival time: an alternative to the hazard ratio for the design and analysis of randomized trials with a time-to-event outcome

    PubMed Central

    2013-01-01

    Background Designs and analyses of clinical trials with a time-to-event outcome almost invariably rely on the hazard ratio to estimate the treatment effect and implicitly, therefore, on the proportional hazards assumption. However, the results of some recent trials indicate that there is no guarantee that the assumption will hold. Here, we describe the use of the restricted mean survival time as a possible alternative tool in the design and analysis of these trials. Methods The restricted mean is a measure of average survival from time 0 to a specified time point, and may be estimated as the area under the survival curve up to that point. We consider the design of such trials according to a wide range of possible survival distributions in the control and research arm(s). The distributions are conveniently defined as piecewise exponential distributions and can be specified through piecewise constant hazards and time-fixed or time-dependent hazard ratios. Such designs can embody proportional or non-proportional hazards of the treatment effect. Results We demonstrate the use of restricted mean survival time and a test of the difference in restricted means as an alternative measure of treatment effect. We support the approach through the results of simulation studies and in real examples from several cancer trials. We illustrate the required sample size under proportional and non-proportional hazards, as well as the significance level and power of the proposed test. Values are compared with those from the standard approach which utilizes the logrank test. Conclusions We conclude that the hazard ratio cannot be recommended as a general measure of the treatment effect in a randomized controlled trial, nor is it always appropriate when designing a trial. Restricted mean survival time may provide a practical way forward and deserves greater attention. PMID:24314264
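
    As the abstract notes, the restricted mean is the area under the survival curve up to the truncation time t*. A minimal from-scratch sketch using the Kaplan-Meier estimator, assuming `time` holds follow-up times and `event` flags observed deaths (the synthetic data and choice of t* are illustrative):

    ```python
    import numpy as np

    def rmst(time, event, t_star):
        """Restricted mean survival time: area under the Kaplan-Meier curve up to t_star."""
        order = np.argsort(time)
        time, event = np.asarray(time)[order], np.asarray(event)[order]
        n_at_risk = len(time)
        s, last_t, area = 1.0, 0.0, 0.0
        for t, d in zip(time, event):
            if t > t_star:
                break
            area += s * (t - last_t)        # survival is a step function
            if d:
                s *= 1.0 - 1.0 / n_at_risk  # KM drop at an event time
            n_at_risk -= 1
            last_t = t
        return area + s * (t_star - last_t)

    rng = np.random.default_rng(6)
    t = rng.exponential(12.0, size=200)     # synthetic survival times (months)
    e = rng.uniform(size=200) < 0.8         # ~20% censored, purely illustrative
    print("RMST to 24 months: %.2f" % rmst(t, e, 24.0))
    ```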

  11. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysis over a range of scores ("shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied by the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of the mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered, since the sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.

  12. Randomized controlled trial of mailed Nicotine Replacement Therapy to Canadian smokers: study protocol.

    PubMed

    Cunningham, John A; Leatherdale, Scott T; Selby, Peter L; Tyndale, Rachel F; Zawertailo, Laurie; Kushnir, Vladyslav

    2011-09-28

    Considerable public health efforts are ongoing Canada-wide to reduce the prevalence of smoking in the general population. From 1985 to 2005, smoking rates among adults decreased from 35% to 19%; however, since that time, the prevalence has plateaued at around 18-19%. To continue to reduce the number of smokers at the population level, one option has been to translate interventions that have demonstrated clinical efficacy into population-level initiatives. Nicotine Replacement Therapy (NRT) has a considerable clinical research base demonstrating its efficacy and safety, and thus public health initiatives in Canada and other countries are distributing NRT widely through the mail. However, one important question remains unanswered: do smoking cessation programs that involve mailed distribution of free NRT work? To answer this question, a randomized controlled trial is required. A single-blinded, panel survey design with random assignment to an experimental and a control condition will be used in this study. A two-stage recruitment process will be employed, in the context of a general population survey with two follow-ups (8 weeks and 6 months). Random digit dialing of Canadian home telephone numbers will identify households with adult smokers (aged 18+ years) who are willing to take part in a smoking study that involves three interviews, with saliva collection for 3-HC/cotinine ratio measurement at baseline and saliva cotinine verification at 8-week and 6-month follow-ups (N = 3,000). Eligible subjects interested in free NRT will be identified at baseline (N = 1,000) and subsequently randomized into experimental and control conditions to receive or not receive nicotine patches. The primary hypothesis is that subjects who receive nicotine patches will display significantly higher quit rates (as assessed by 30-day point prevalence of abstinence from tobacco) at the 6-month follow-up than subjects who do not receive nicotine patches at baseline. The findings from the proposed trial are timely and highly relevant, as mailed distribution of NRT requires considerable resources and there are limited public health dollars available to combat this substantial health concern. In addition, findings from this randomized controlled trial will inform the development of models to engage smokers to quit, incorporating proactive recruitment and the offer of evidence-based treatment. ClinicalTrials.gov: NCT01429129.

  13. General models for the distributions of electric field gradients in disordered solids

    NASA Astrophysics Data System (ADS)

    LeCaër, G.; Brand, R. A.

    1998-11-01

    Hyperfine studies of disordered materials often yield the distribution of the electric field gradient (EFG) or of the related quadrupole splitting (QS). The question of the structural information that may be extracted from such distributions has been considered for more than fifteen years. Experimentally, most studies have been performed using Mössbauer spectroscopy, especially on 57Fe. However, NMR, NQR, EPR and PAC methods have also received some attention. The EFG distribution for a random distribution of electric charges was first investigated by Czjzek et al [1], and a general functional form was derived for the joint (bivariate) distribution of the principal EFG tensor component V_zz and the asymmetry parameter η. The importance of the Gauss distribution for such rotationally invariant structural models was thus evidenced. Extensions of that model, based on degenerate multivariate Gauss distributions for the elements of the EFG tensor, were proposed by Czjzek. The latter extensions have been used since that time, more particularly in Mössbauer spectroscopy, under the name 'shell models'. The mathematical foundations of all the previous models are presented and critically discussed, as they are evidenced by simple calculations in the case of the EFG tensor. The present article focuses only on those aspects of the EFG distribution in disordered solids which can be discussed without explicitly looking at particular physical mechanisms. We present studies of three different model systems. A reference model directly related to the first model of Czjzek, called the Gaussian isotropic model (GIM), is shown to be the limiting case for many different models with a large number of independent contributions to the EFG tensor, not restricted to a point-charge model. The extended validity of the marginal distribution of η in the GIM is discussed. It is also shown that the second model, based on degenerate multivariate normal distributions for the EFG components, yields questionable results and has been used to excess in experimental studies. The latter models are further discussed in the light of new results. The problems raised by these extensions are due to the fact that the consequences of the statistical invariance by rotation of the EFG tensor have not been sufficiently taken into account. Further difficulties arise because the structural degrees of freedom of the disordered solid under consideration have been confused with the degrees of freedom of QS distributions. The relations which are derived and discussed are further illustrated by the case of the EFG tensor distribution created at the centre of a sphere by m charges randomly distributed on its surface. The third model, a simple extension of the GIM, considers the case of an EFG tensor which is the sum of a fixed part and of a random part with variable weights. The bivariate distribution P(V_zz, η) is calculated exactly in the most symmetric case, and the effect of the random part is investigated as a function of its weight. The various models are discussed more particularly in connection with short-range order in disordered solids. An ambiguity problem which arises in the evaluation of bivariate distributions of centre line shift (isomer shift) and quadrupole splitting from 57Fe Mössbauer spectra is finally considered quantitatively.
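
    The GIM can be sampled directly: draw the EFG tensor as a rotation-invariant Gaussian random symmetric traceless matrix, diagonalize, and read off V_zz and η. A minimal sketch under those assumptions; the variance choices making the measure rotation-invariant and the eigenvalue convention |V_zz| ≥ |V_yy| ≥ |V_xx| (so that η lies in [0, 1]) are standard but chosen by us, not taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def sample_efg(sigma=1.0):
        """Rotation-invariant Gaussian symmetric traceless 3x3 tensor (GIM)."""
        a = rng.normal(scale=sigma, size=(3, 3))
        v = (a + a.T) / 2.0                  # symmetric; off-diagonals get variance sigma^2 / 2
        return v - np.trace(v) / 3.0 * np.eye(3)

    vzz, eta = [], []
    for _ in range(20_000):
        w = np.linalg.eigvalsh(sample_efg())
        w = w[np.argsort(np.abs(w))]         # order so |Vxx| <= |Vyy| <= |Vzz|
        vzz.append(w[2])
        eta.append(abs(w[0] - w[1]) / abs(w[2]))

    eta = np.asarray(eta)
    print("mean eta = %.3f" % eta.mean())    # the Czjzek eta marginal is broad, peaking near ~0.6
    ```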

  14. Multivariate analysis and extraction of parameters in resistive RAMs using the Quantum Point Contact model

    NASA Astrophysics Data System (ADS)

    Roldán, J. B.; Miranda, E.; González-Cordero, G.; García-Fernández, P.; Romero-Zaliz, R.; González-Rodelas, P.; Aguilera, A. M.; González, M. B.; Jiménez-Molinos, F.

    2018-01-01

    A multivariate analysis of the parameters that characterize the reset process in Resistive Random Access Memory (RRAM) has been performed. The different correlations obtained can help to shed light on the current components that contribute to conduction in the Low Resistance State (LRS) of the technology considered. In addition, a screening method for the Quantum Point Contact (QPC) current component is presented. For this purpose, the second derivative of the current has been obtained using a novel numerical method which allows the QPC model parameters to be determined. Once the procedure is completed, a whole Resistive Switching (RS) series of thousands of curves is studied by means of a genetic algorithm. The extracted QPC parameter distributions are characterized in depth to obtain information about the filamentary pathways associated with the LRS in the low-voltage conduction regime.

  15. Solid oxide fuel cell anode image segmentation based on a novel quantum-inspired fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Fu, Xiaowei; Xiang, Yuhan; Chen, Li; Xu, Xin; Li, Xi

    2015-12-01

    High-quality microstructure modeling can optimize the design of fuel cells. For accurate three-phase identification of Solid Oxide Fuel Cell (SOFC) microstructures, this paper proposes a novel image segmentation method for YSZ/Ni anode Optical Microscopic (OM) images. According to Quantum Signal Processing (QSP), the proposed approach exploits a quantum-inspired adaptive fuzziness factor to adaptively estimate the energy function of a fuzzy system based on a Markov Random Field (MRF). Before defuzzification, a quantum-inspired probability distribution based on distance and gray correction is proposed, which can adaptively adjust the inaccurate probability estimates of uncertain points caused by noise and edge points. The proposed method improves the accuracy and effectiveness of three-phase identification in microstructural investigation. It provides a firm foundation for investigating the microstructural evolution and its related properties.

  16. Broken symmetries, zero-energy modes, and quantum transport in disordered graphene: from supermetallic to insulating regimes.

    PubMed

    Cresti, Alessandro; Ortmann, Frank; Louvet, Thibaud; Van Tuan, Dinh; Roche, Stephan

    2013-05-10

    The role of defect-induced zero-energy modes on charge transport in graphene is investigated using Kubo and Landauer transport calculations. By tuning the density of random distributions of monovacancies either equally populating the two sublattices or exclusively located on a single sublattice, all conduction regimes are covered from direct tunneling through evanescent modes to mesoscopic transport in bulk disordered graphene. Depending on the transport measurement geometry, defect density, and broken sublattice symmetry, the Dirac-point conductivity is either exceptionally robust against disorder (supermetallic state) or suppressed through a gap opening or by algebraic localization of zero-energy modes, whereas weak localization and the Anderson insulating regime are obtained for higher energies. These findings clarify the contribution of zero-energy modes to transport at the Dirac point, hitherto controversial.

  17. Effect of optical digitizer selection on the application accuracy of a surgical localization system-a quantitative comparison between the OPTOTRAK and flashpoint tracking systems

    NASA Technical Reports Server (NTRS)

    Li, Q.; Zamorano, L.; Jiang, Z.; Gong, J. X.; Pandya, A.; Perez, R.; Diaz, F.

    1999-01-01

    Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures.
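
    The RMS figure of merit here is the root of the mean squared Euclidean deviation between digitized and reference marker positions. A minimal sketch of that computation (the coordinates are illustrative):

    ```python
    import numpy as np

    def rms_error(digitized, reference):
        """Root mean square of Euclidean deviations between paired 3D points (mm)."""
        d = np.asarray(digitized) - np.asarray(reference)
        return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))

    ref = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0], [40.0, 10.0, 5.0]])
    dig = ref + np.random.default_rng(8).normal(scale=0.5, size=ref.shape)
    print("RMS = %.2f mm" % rms_error(dig, ref))
    ```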

  18. Effect of optical digitizer selection on the application accuracy of a surgical localization system-a quantitative comparison between the OPTOTRAK and flashpoint tracking systems.

    PubMed

    Li, Q; Zamorano, L; Jiang, Z; Gong, J X; Pandya, A; Perez, R; Diaz, F

    1999-01-01

    Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures.

  19. A scaling law for random walks on networks

    PubMed Central

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-01-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics. PMID:25311870

  20. Partial transpose of random quantum states: Exact formulas and meanders

    NASA Astrophysics Data System (ADS)

    Fukuda, Motohisa; Śniady, Piotr

    2013-04-01

    We investigate the asymptotic behavior of the empirical eigenvalue distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how the dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show a large deviation bound for the latter.
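
    The object studied here can be sampled directly: form a random induced state ρ = GG†/Tr(GG†) from a complex Gaussian matrix, partially transpose one tensor factor, and inspect the eigenvalues. A minimal sketch in one regime (dimensions are illustrative; with environment dimension of order d², the partial-transpose spectrum is expected to approach a shifted semicircle):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    d = 16                                     # each tensor factor is d-dimensional
    k = d * d                                  # environment dimension ~ d^2

    g = rng.normal(size=(d * d, k)) + 1j * rng.normal(size=(d * d, k))
    rho = g @ g.conj().T
    rho /= np.trace(rho).real                  # random induced state of trace 1

    # Partial transpose on the second factor: rho[(i,j),(k,l)] -> rho[(i,l),(k,j)].
    pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
    eig = np.linalg.eigvalsh(pt)               # pt is Hermitian, so real spectrum
    print("min/max eigenvalue of rho^T2: %.5f / %.5f" % (eig.min(), eig.max()))
    print("fraction of negative eigenvalues: %.3f" % np.mean(eig < 0))
    ```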

  1. A scaling law for random walks on networks

    NASA Astrophysics Data System (ADS)

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-10-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.

  2. A scaling law for random walks on networks.

    PubMed

    Perkins, Theodore J; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-10-14

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.

  3. Turbulence hierarchy in a random fibre laser

    PubMed Central

    González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman

    2017-01-01

    Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064

  4. From Protocols to Publications: A Study in Selective Reporting of Outcomes in Randomized Trials in Oncology

    PubMed Central

    Raghav, Kanwal Pratap Singh; Mahajan, Sminil; Yao, James C.; Hobbs, Brian P.; Berry, Donald A.; Pentz, Rebecca D.; Tam, Alda; Hong, Waun K.; Ellis, Lee M.; Abbruzzese, James; Overman, Michael J.

    2015-01-01

    Purpose The decision by journals to append protocols to published reports of randomized trials was a landmark event in clinical trial reporting. However, limited information is available on how this initiative affected transparency and selective reporting of clinical trial data. Methods We analyzed 74 oncology-based randomized trials published in Journal of Clinical Oncology, the New England Journal of Medicine, and The Lancet in 2012. To ascertain integrity of reporting, we compared published reports with their respective appended protocols with regard to primary end points, nonprimary end points, unplanned end points, and unplanned analyses. Results A total of 86 primary end points were reported in 74 randomized trials; nine trials had more than one primary end point. Nine trials (12.2%) had some discrepancy between their planned and published primary end points. A total of 579 nonprimary end points (median, seven per trial) were planned, of which 373 (64.4%; median, five per trial) were reported. A significant positive correlation was found between the number of planned and nonreported nonprimary end points (Spearman r = 0.66; P < .001). Twenty-eight studies (37.8%) reported a total of 65 unplanned end points, 52 (80.0%) of which were not identified as unplanned. Thirty-one (41.9%) and 19 (25.7%) of 74 trials reported a total of 52 unplanned analyses involving primary end points and 33 unplanned analyses involving nonprimary end points, respectively. Studies reported positive unplanned end points and unplanned analyses more frequently than negative outcomes in abstracts (unplanned end points odds ratio, 6.8; P = .002; unplanned analyses odds ratio, 8.4; P = .007). Conclusion Despite public and reviewer access to protocols, selective outcome reporting persists and is a major concern in the reporting of randomized clinical trials. To foster credible evidence-based medicine, additional initiatives are needed to minimize selective reporting. PMID:26304898

  5. Typical entanglement

    NASA Astrophysics Data System (ADS)

    Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio

    2013-05-01

    Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
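
    The setting is easy to reproduce numerically: a random pure state of an N×M system is a normalized complex Gaussian matrix, and ρ_A = ψψ†. A minimal sketch (dimensions are illustrative; for M much larger than N the eigenvalues cluster near 1/N, consistent with near-maximal typical entanglement):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n_a, n_b = 8, 64                           # subsystem and bath dimensions

    psi = rng.normal(size=(n_a, n_b)) + 1j * rng.normal(size=(n_a, n_b))
    psi /= np.linalg.norm(psi)                 # random pure state in the NM-dim Hilbert space
    rho_a = psi @ psi.conj().T                 # reduced density matrix of the N-dim subsystem

    lam = np.linalg.eigvalsh(rho_a)            # spectrum of rho_A
    entropy = -np.sum(lam * np.log(lam))
    print("spectrum:", np.round(lam, 4))
    print("entanglement entropy %.3f of max %.3f" % (entropy, np.log(n_a)))
    ```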

  6. A statistical physics perspective on criticality in financial markets

    NASA Astrophysics Data System (ADS)

    Bury, Thomas

    2013-11-01

    Stock markets are complex systems exhibiting collective phenomena and particular features such as synchronization, fluctuations distributed as power-laws, non-random structures and similarity to neural networks. Such specific properties suggest that markets operate at a very special point. Financial markets are believed to be critical by analogy to physical systems, but little statistically founded evidence has been given. Through a data-based methodology and comparison to simulations inspired by the statistical physics of complex systems, we show that the Dow Jones and index sets are not rigorously critical. However, financial systems are closer to criticality in the crash neighborhood.

  7. Explicit equilibria in a kinetic model of gambling

    NASA Astrophysics Data System (ADS)

    Bassetti, F.; Toscani, G.

    2010-06-01

    We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is up for gambling and randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum which is shared among the agents. Among others, the exponential distribution appears as the steady state in the case of a uniformly distributed random fraction, while a Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
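
    The stationary laws quoted here can be checked by direct simulation of the microscopic game: two agents pool their wealth and split it by a random fraction. A minimal sketch for the uniform case, whose steady state is exponential (population size and iteration count are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_agents, n_steps = 2_000, 400_000
    w = np.ones(n_agents)                      # everyone starts with unit wealth

    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        pool = w[i] + w[j]                     # entire sum is up for gambling
        u = rng.uniform()                      # uniformly distributed random fraction
        w[i], w[j] = u * pool, (1.0 - u) * pool

    # For an exponential steady state with unit mean: var = 1 and P(w > 1) = 1/e ~ 0.368.
    print("mean %.3f  var %.3f  P(w>1) %.3f" % (w.mean(), w.var(), np.mean(w > 1)))
    ```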

  8. Narrow-band generation in random distributed feedback fiber laser.

    PubMed

    Sugavanam, Srikanth; Tarasov, Nikita; Shu, Xuewen; Churkin, Dmitry V

    2013-07-15

    Narrow-band emission with a spectral width down to ~0.05 nm is achieved in a random distributed feedback fiber laser employing narrow-band fiber Bragg grating or fiber Fabry-Perot interferometer filters. The observed line-width is about 10 times narrower than that of the random distributed feedback fiber lasers demonstrated to date. The random DFB laser with the Fabry-Perot interferometer filter simultaneously provides multi-wavelength and narrow-band (within each line) generation, with the possibility of further wavelength tuning.

  9. Visual Perception of Touchdown Point During Simulated Landing

    ERIC Educational Resources Information Center

    Palmisano, Stephen; Gillam, Barbara

    2005-01-01

    Experiments examined the accuracy of visual touchdown point perception during oblique descents (1.5°-15°) toward a ground plane consisting of (a) randomly positioned dots, (b) a runway outline, or (c) a grid. Participants judged whether the perceived touchdown point was above or below a probe that appeared at a random position following each…

  10. Self-organization of complex networks as a dynamical system

    NASA Astrophysics Data System (ADS)

    Aoki, Takaaki; Yawata, Koichiro; Aoyagi, Toshio

    2015-01-01

    To understand the dynamics of real-world networks, we investigate a mathematical model of the interplay between the dynamics of random walkers on a weighted network and the link weights driven by a resource carried by the walkers. Our numerical studies reveal that, under suitable conditions, the co-evolving dynamics lead to the emergence of stationary power-law distributions of the resource and link weights, while the resource quantity at each node ceaselessly changes with time. We analyze the network organization as a deterministic dynamical system and find that the system exhibits multistability, with numerous fixed points, limit cycles, and chaotic states. The chaotic behavior of the system leads to the continual changes in the microscopic network dynamics in the absence of any external random noises. We conclude that the intrinsic interplay between the states of the nodes and network reformation constitutes a major factor in the vicissitudes of real-world networks.

  11. The trajectory of scientific discovery: concept co-occurrence and converging semantic distance.

    PubMed

    Cohen, Trevor; Schvaneveldt, Roger W

    2010-01-01

    The paradigm of literature-based knowledge discovery originated by Swanson involves finding meaningful associations between terms or concepts that have not occurred together in any previously published document. While several automated approaches have been applied to this problem, these generally evaluate the literature at a point in time, and do not evaluate the role of change over time in distributional statistics as an indicator of meaningful implicit associations. To address this issue, we develop and evaluate Symmetric Random Indexing (SRI), a novel variant of the Random Indexing (RI) approach that is able to measure implicit association over time. SRI is found to compare favorably to existing RI variants in the prediction of future direct co-occurrence. Summary statistics over several experiments suggest a trend of converging semantic distance prior to the co-occurrence of key terms for two seminal historical literature-based discoveries.

  12. Self-organization of complex networks as a dynamical system.

    PubMed

    Aoki, Takaaki; Yawata, Koichiro; Aoyagi, Toshio

    2015-01-01

    To understand the dynamics of real-world networks, we investigate a mathematical model of the interplay between the dynamics of random walkers on a weighted network and the link weights driven by a resource carried by the walkers. Our numerical studies reveal that, under suitable conditions, the co-evolving dynamics lead to the emergence of stationary power-law distributions of the resource and link weights, while the resource quantity at each node ceaselessly changes with time. We analyze the network organization as a deterministic dynamical system and find that the system exhibits multistability, with numerous fixed points, limit cycles, and chaotic states. The chaotic behavior of the system leads to the continual changes in the microscopic network dynamics in the absence of any external random noises. We conclude that the intrinsic interplay between the states of the nodes and network reformation constitutes a major factor in the vicissitudes of real-world networks.

  13. Is the suicide rate a random walk?

    PubMed

    Yang, Bijou; Lester, David; Lyke, Jennifer; Olsen, Robert

    2015-06-01

    The yearly suicide rates for the period 1933-2010 and the daily suicide numbers for 1990 and 1991 were examined for whether the distribution of difference scores (from year to year and from day to day) fitted a normal distribution, a characteristic of stochastic processes that follow a random walk. If the suicide rate were a random walk, then any disturbance to the suicide rate would have a permanent effect and national suicide prevention efforts would likely fail. The distribution of difference scores from day to day (but not the difference scores from year to year) fitted a normal distribution and, therefore, were consistent with a random walk.
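
    The diagnostic used here is straightforward: under a random walk, first differences are i.i.d. innovations, so one tests the difference scores for normality. A minimal sketch with a synthetic daily series (the data are illustrative, not the study's):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12)
    daily = np.cumsum(rng.normal(size=365)) + 80.0   # synthetic random-walk "daily counts"

    diffs = np.diff(daily)                           # day-to-day difference scores
    stat, p = stats.shapiro(diffs)                   # Shapiro-Wilk normality test
    print("W = %.3f, p = %.3f" % (stat, p))          # large p: consistent with normal increments
    ```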

  14. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  15. Analgesic effects of treatments for non-specific low back pain: a meta-analysis of placebo-controlled randomized trials.

    PubMed

    Machado, L A C; Kamper, S J; Herbert, R D; Maher, C G; McAuley, J H

    2009-05-01

    Estimates of treatment effects reported in placebo-controlled randomized trials are less subject to bias than those estimates provided by other study designs. The objective of this meta-analysis was to estimate the analgesic effects of treatments for non-specific low back pain reported in placebo-controlled randomized trials. Medline, Embase, Cinahl, PsychInfo and Cochrane Central Register of Controlled Trials databases were searched for eligible trials from earliest records to November 2006. Continuous pain outcomes were converted to a common 0-100 scale and pooled using a random effects model. A total of 76 trials reporting on 34 treatments were included. Fifty percent of the investigated treatments had statistically significant effects, but for most the effects were small or moderate: 47% had point estimates of effects of <10 points on the 100-point scale, 38% had point estimates from 10 to 20 points and 15% had point estimates of >20 points. Treatments reported to have large effects (>20 points) had been investigated only in a single trial. This meta-analysis revealed that the analgesic effects of many treatments for non-specific low back pain are small and that they do not differ in populations with acute or chronic symptoms.
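
    The pooling step described here uses a random effects model on a common 0-100 pain scale; one standard choice is DerSimonian-Laird, which estimates the between-trial variance τ² from Cochran's Q and then inverse-variance weights. A minimal sketch with made-up trial effects (the numbers are illustrative, not from the meta-analysis):

    ```python
    import numpy as np

    def random_effects_pool(effects, ses):
        """DerSimonian-Laird pooled effect and its standard error."""
        w = 1.0 / ses ** 2
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)                # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-trial variance
        w_star = 1.0 / (ses ** 2 + tau2)
        pooled = np.sum(w_star * effects) / np.sum(w_star)
        return pooled, np.sqrt(1.0 / np.sum(w_star))

    effects = np.array([8.0, 12.0, 5.0, 15.0, 9.0])   # hypothetical pain reductions (points)
    ses = np.array([3.0, 4.0, 2.5, 5.0, 3.5])         # hypothetical standard errors
    print("pooled effect %.1f (SE %.1f)" % random_effects_pool(effects, ses))
    ```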

  16. The Effect of Stochastic Perturbation of Fuel Distribution on the Criticality of a One Speed Reactor and the Development of Multi-Material Multinomial Line Statistics

    NASA Technical Reports Server (NTRS)

    Jahshan, S. N.; Singleterry, R. C.

    2001-01-01

    The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The ensemble-average eigenvalue is evaluated when the total fissile loading per ensemble element, or realization, is conserved. The perturbation is proven to increase the reactor criticality on average when it is uniformly distributed. The various causes of the change in reactivity and their relative effects are identified and ranked. From this, a path towards identifying the causes and relative effects of reactivity fluctuations for the energy-dependent problem is indicated. The perturbation method of using multinomial distributions to represent the perturbed reactor is developed. This method has some advantages that can be of use in other stochastic problems. Finally, some of the features of this perturbation problem are related to other techniques that have been used for addressing similar problems.

  17. RAQ–A Random Forest Approach for Predicting Air Quality in Urban Sensing Systems

    PubMed Central

    Yu, Ruiyun; Yang, Yu; Yang, Leyou; Han, Guangjie; Move, Oguti Ann

    2016-01-01

    Air quality information such as the concentration of PM2.5 is of great significance for human health and city management. It affects travel behavior, urban planning, government policies and so on. However, major cities typically have only a limited number of air quality monitoring stations. At the same time, air quality varies across urban areas, and there can be large differences even between closely neighboring regions. In this paper, a random forest approach for predicting air quality (RAQ) is proposed for urban sensing systems. The data generated by urban sensing include meteorology data, road information, real-time traffic status and point of interest (POI) distribution. The random forest algorithm is exploited for data training and prediction. The performance of RAQ is evaluated with real city data. Compared with three other algorithms, this approach achieves better prediction precision. The experiments show that air quality can be inferred with high accuracy from the data obtained by urban sensing. PMID:26761008
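
    As a minimal sketch of the core idea (not the paper's RAQ pipeline), the following fragment trains a random forest on hypothetical urban-sensing feature vectors; the feature layout, class labels and sample counts are illustrative assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # Hypothetical features per region: [temperature, humidity, wind speed,
      # road density, traffic level, POI count]; labels are toy AQI classes.
      rng = np.random.default_rng(0)
      X = rng.random((500, 6))
      y = rng.integers(0, 3, 500)

      model = RandomForestClassifier(n_estimators=200, random_state=0)
      model.fit(X[:400], y[:400])                 # train on the first 400 regions
      print("held-out accuracy:", model.score(X[400:], y[400:]))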

  18. Universality of long-range correlations in expansion randomization systems

    NASA Astrophysics Data System (ADS)

    Messer, P. W.; Lässig, M.; Arndt, P. F.

    2005-10-01

    We study the stochastic dynamics of sequences evolving by single-site mutations, segmental duplications, deletions, and random insertions. These processes are relevant for the evolution of genomic DNA. They define a universality class of non-equilibrium 1D expansion-randomization systems with generic stationary long-range correlations in a regime of growing sequence length. We obtain explicitly the two-point correlation function of the sequence composition and the distribution function of the composition bias in sequences of finite length. The characteristic exponent χ of these quantities is determined by the ratio of two effective rates, which are explicitly calculated for several specific sequence evolution dynamics of the universality class. Depending on the value of χ, we find two different scaling regimes, which are distinguished by the detectability of the initial composition bias. All analytic results are accurately verified by numerical simulations. We also discuss the non-stationary build-up and decay of correlations, as well as more complex evolutionary scenarios, where the rates of the processes vary in time. Our findings provide a possible example for the emergence of universality in molecular biology.

  19. Critical spreading dynamics of parity conserving annihilating random walks with power-law branching

    NASA Astrophysics Data System (ADS)

    Laise, T.; dos Anjos, F. C.; Argolo, C.; Lyra, M. L.

    2018-09-01

    We investigate the critical spreading of the parity conserving annihilating random walks model with Lévy-like branching. The random walks are considered to perform normal diffusion with probability p on the sites of a one-dimensional lattice, annihilating in pairs by contact. With probability 1 - p, each particle can also produce two offspring which are placed at a distance r from the original site following a power-law Lévy-like distribution P(r) ∝ 1/rα. We perform numerical simulations starting from a single particle. A finite-time scaling analysis is employed to locate the critical diffusion probability pc below which a finite density of particles is developed in the long-time limit. Further, we estimate the spreading dynamical exponents related to the increase of the average number of particles at the critical point and its respective fluctuations. The critical exponents deviate from those of the counterpart model with short-range branching for small values of α. The numerical data suggest that continuously varying spreading exponents arise while the branching process still results in diffusive-like spreading.
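
    A minimal sketch of the offspring-placement step, assuming the distance is drawn from a continuous power law by inverse-transform sampling and rounded onto the lattice (the paper's exact discretization may differ):

      import numpy as np

      rng = np.random.default_rng(1)

      def levy_offset(alpha, r_min=1.0):
          """Sample a branching distance r from P(r) ∝ 1/r**alpha (r >= r_min, alpha > 1)
          by inverse-transform sampling, with a random sign for the 1D lattice."""
          u = rng.random()
          r = r_min * u ** (-1.0 / (alpha - 1.0))   # Pareto-distributed distance
          return int(np.round(r)) * rng.choice([-1, 1])

      # place two offspring around a parent at site x (toy illustration)
      x = 0
      offspring = [x + levy_offset(alpha=2.5), x + levy_offset(alpha=2.5)]
      print(offspring)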

  20. Optic fiber sensor-based smart bridge cable with functionality of self-sensing

    NASA Astrophysics Data System (ADS)

    He, Jianping; Zhou, Zhi; Ou, Jinping

    2013-02-01

    Bridge cables, characterized by large distributed spans, service in harsh environments and vulnerability to random damage, are the key load-sustaining components of cable-based bridges. To ensure the safety of the bridge structure, it is critical to monitor the loading conditions of these cables under lengthwise random damage. Aiming at accurate monitoring at the critical points as well as general information on the cable force distributed along the entire cable, this paper presents a study on cable force monitoring that combines optical fiber Bragg grating (FBG) sensors and the Brillouin optical time domain analysis/reflectometry (BOTDA/R) sensing technique in one single optical fiber. A smart FRP-OF-FBG rebar based cable was fabricated by protruding a FRP-packaged OF-FBG sensor into the bridge cable. Its sensing characteristics, stability under high stress, temperature self-compensation, as well as BOTDA/R distributed data improvement by local FBG sensors have been investigated. The results show that the FRP-OF-FBG rebar in the smart cable deforms consistently along with the steel wire, and the cable force obtained from the optical fiber sensors agrees well with theoretical values, with relative error less than ±5%. Besides, the temperature self-compensation method provides a significantly cost-effective technique for in situ cable force measurement of FRP-OF-FBG based cables. Furthermore, potential damage to the bridge cable, e.g. wire breaking and corrosion, can be characterized and symbolized by the discontinuity and fluctuation of the distributed BOTDA data, with accuracy thereafter improved by local FBG sensors.

  1. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  2. Hybrid Percolation Transition in Cluster Merging Processes: Continuously Varying Exponents

    NASA Astrophysics Data System (ADS)

    Cho, Y. S.; Lee, J. S.; Herrmann, H. J.; Kahng, B.

    2016-01-01

    Consider growing a network in which every new connection is made between two disconnected nodes. At least one node is chosen randomly from a subset consisting of a fraction g of the entire population located in the smallest clusters. Here we show that this simple strategy for improving connection exhibits an unusual phase transition, namely a hybrid percolation transition exhibiting the properties of both first-order and second-order phase transitions. The cluster size distribution of finite clusters at a transition point exhibits power-law behavior with a continuously varying exponent τ in the range 2 < τ(g) ≤ 2.5. This pattern reveals a necessary condition for a hybrid transition in cluster aggregation processes, which is comparable to the power-law behavior of the avalanche size distribution arising in models with link-deleting processes in interdependent networks.

  3. Filtering Raw Terrestrial Laser Scanning Data for Efficient and Accurate Use in Geomorphologic Modeling

    NASA Astrophysics Data System (ADS)

    Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.

    2011-12-01

    Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.

  4. Quantifying evenly distributed states in exclusion and nonexclusion processes

    NASA Astrophysics Data System (ADS)

    Binder, Benjamin J.; Landman, Kerry A.

    2011-04-01

    Spatial-point data sets, generated from a wide range of physical systems and mathematical models, can be analyzed by counting the number of objects in equally sized bins. We find that the bin counts are related to the Pólya distribution. New measures are developed which indicate whether or not a spatial data set, generated from an exclusion process, is at its most evenly distributed state, the complete spatial randomness (CSR) state. To this end, we define an index in terms of the variance between the bin counts. Limiting values of the index are determined when objects have access to the entire domain and when there are subregions of the domain that are inaccessible to objects. Using three case studies (Lagrangian fluid particles in chaotic laminar flows, cellular automata agents in discrete models, and biological cells within colonies), we calculate the indexes and verify that our theoretical CSR limit accurately predicts the state of the system. These measures should prove useful in many biological applications.
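
    The paper defines its index through the variance between bin counts; as a stand-in, the sketch below computes the classic variance-to-mean ratio of bin counts on a 1-D unit domain, which is near 1 for CSR-like data and well below 1 for data that are more evenly distributed than random.

      import numpy as np

      rng = np.random.default_rng(2)

      def dispersion_index(points, n_bins):
          """Variance-to-mean ratio of counts in equally sized bins: close to 1
          for complete spatial randomness (Poisson counts), below 1 for more
          evenly distributed data (e.g. from an exclusion process)."""
          counts, _ = np.histogram(points, bins=n_bins, range=(0.0, 1.0))
          return counts.var() / counts.mean()

      csr_points = rng.random(1000)                               # CSR-like data
      even_points = (np.arange(1000) + rng.random(1000)) / 1000   # near-even data
      print(dispersion_index(csr_points, 50), dispersion_index(even_points, 50))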

  5. New approach application of data transformation in mean centering of ratio spectra method

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.

    2015-05-01

    Most mean centering (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one frequently used transformation. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
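
    The log transform and geometric mean are easy to demonstrate; in the sketch below the readings are hypothetical:

      import numpy as np

      # hypothetical skewed readings; one large value drags the arithmetic mean
      data = np.array([0.8, 1.1, 1.3, 4.9, 0.9, 1.2])

      log_data = np.log(data)              # transform toward approximate normality
      geo_mean = np.exp(log_data.mean())   # geometric mean = exp(mean of the logs)
      print(geo_mean, data.mean())         # geometric mean is less pulled by the outlier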

  6. Effect of texture randomization on the slip and interfacial robustness in turbulent flows over superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Seo, Jongmin; Mani, Ali

    2018-04-01

    Superhydrophobic surfaces demonstrate promising potential for skin friction reduction in naval and hydrodynamic applications. Recent developments of superhydrophobic surfaces aiming for scalable applications use random distributions of roughness, such as spray coating and etching processes. However, most previous analyses of the interaction between flows and superhydrophobic surfaces studied periodic geometries that are economically feasible only in laboratory-scale experiments. In order to assess the drag reduction effectiveness as well as interfacial robustness of superhydrophobic surfaces with randomly distributed textures, we conduct direct numerical simulations of turbulent flows over randomly patterned interfaces considering a range of texture widths w⁺ ≈ 4-26 and solid fractions ϕ_s = 11%-25%. Slip and no-slip boundary conditions are implemented in a pattern, modeling the presence of gas-liquid interfaces and solid elements. Our results indicate that the slip of randomly distributed textures under turbulent flows is about 30% less than that of surfaces with aligned features of the same size. In the small texture size limit w⁺ ≈ 4, the slip length of the randomly distributed textures in turbulent flows is well described by a previously introduced Stokes flow solution for randomly distributed shear-free holes. By comparing DNS results for patterned slip and no-slip boundaries against the corresponding homogenized slip length boundary conditions, we show that turbulent flows over randomly distributed posts can be represented by an isotropic slip length in the streamwise and spanwise directions. The average pressure fluctuation on a gas pocket is similar to that of aligned features with the same texture size and gas fraction, but the maximum interface deformation at the leading edge of the roughness element is about twice as large when the textures are randomly distributed. The presented analyses provide insights on the implications of texture randomness for the drag reduction performance and robustness of superhydrophobic surfaces.

  7. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  8. Effect of Deploying Trained Community Based Reproductive Health Nurses (CORN) on Long-Acting Reversible Contraception (LARC) Use in Rural Ethiopia: A Cluster Randomized Community Trial.

    PubMed

    Zerfu, Taddese Alemu; Ayele, Henok Taddese; Bogale, Tariku Nigatu

    2018-06-01

    To investigate the effect of innovative means of distributing LARC on contraceptive use, we implemented a three-arm, parallel-group, cluster randomized community trial. The intervention consisted of placing trained community-based reproductive health nurses (CORN) within health centers or health posts. The nurses provided counseling to encourage women to use LARC and distributed all contraceptive methods. A total of 282 villages were randomly selected and assigned to a control arm (n = 94) or 1 of 2 treatment arms (n = 94 each). The treatment groups differed by where the new service providers were deployed, health post or health center. We calculated difference-in-difference (DID) estimates to assess program impacts on LARC use. After nine months of intervention, the use of LARC methods increased significantly by 72.3 percent, while the use of short acting methods declined by 19.6 percent. The proportion of women using LARC methods increased by 45.9 percent and 45.7 percent in the health post and health center based intervention arms, respectively. Compared to the control group, the DID estimates indicate that the use of LARC methods increased by 11.3 and 12.3 percentage points in the health post and health center based intervention arms. Given the low use of LARC methods in similar settings, deployment of contextually trained nurses at the grassroots level could substantially increase utilization of these methods. © 2018 The Population Council, Inc.
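
    The DID computation itself is a one-liner; the proportions below are illustrative stand-ins, not the trial's raw data.

      # Difference-in-differences (DID) from before/after proportions of LARC use
      # in a treatment arm versus the control arm (illustrative percentages).
      control_before, control_after = 8.0, 42.0
      treat_before, treat_after = 7.5, 53.0

      did = (treat_after - treat_before) - (control_after - control_before)
      print(f"DID estimate: {did:.1f} percentage points")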

  9. Mapping risk of plague in Qinghai-Tibetan Plateau, China.

    PubMed

    Qian, Quan; Zhao, Jian; Fang, Liqun; Zhou, Hang; Zhang, Wenyi; Wei, Lan; Yang, Hong; Yin, Wenwu; Cao, Wuchun; Li, Qun

    2014-07-10

    Qinghai-Tibetan Plateau of China is known to be a plague endemic region where the marmot (Marmota himalayana) is the primary host. Human plague cases have relatively low incidence but high mortality, which presents unique surveillance and public health challenges, because early detection through surveillance may not always be feasible and infrequent clinical cases may be misdiagnosed. Based on plague surveillance data and environmental variables, Maxent was applied to model the presence probability of the plague host. 75% of the occurrence points were randomly selected for model training, and the remaining 25% were used for model testing and validation. Maxent model performance was measured as test gain and test AUC. The optimal probability cut-off value was chosen by maximizing training sensitivity and specificity simultaneously. We used field surveillance data in an ecological niche modeling (ENM) framework to depict the spatial distribution of natural foci of plague in Qinghai-Tibetan Plateau. Most human-inhabited areas at risk of exposure to enzootic plague are distributed in the east and south of the Plateau. Elevation, temperature of land surface and normalized difference vegetation index play a large part in determining the distribution of the enzootic plague. This study provided a more detailed view of the spatial pattern of enzootic plague and human-inhabited areas at risk of plague. The maps could help public health authorities decide where to perform plague surveillance and take preventive measures in Qinghai-Tibetan Plateau.

  10. Closer look at time averages of the logistic map at the edge of chaos

    NASA Astrophysics Data System (ADS)

    Tirnakli, Ugur; Tsallis, Constantino; Beck, Christian

    2009-05-01

    The probability distribution of sums of iterates of the logistic map at the edge of chaos has been recently shown [U. Tirnakli, Phys. Rev. E 75, 040106(R) (2007)] to be numerically consistent with a q-Gaussian, the distribution which—under appropriate constraints—maximizes the nonadditive entropy S_q, which is the basis of nonextensive statistical mechanics. This analysis was based on a study of the tails of the distribution. We now check the entire distribution, in particular, its central part. This is important in view of a recent q generalization of the central limit theorem, which states that for certain classes of strongly correlated random variables the rescaled sum approaches a q-Gaussian limit distribution. We numerically investigate for the logistic map with a parameter in a small vicinity of the critical point under which conditions there is convergence to a q-Gaussian both in the central region and in the tail region and find a scaling law involving the Feigenbaum constant δ. Our results are consistent with a large number of already available analytical and numerical evidences that the edge of chaos is well described in terms of the entropy S_q and its associated concepts.

  11. Micromechanical analysis of composites with fibers distributed randomly over the transverse cross-section

    NASA Astrophysics Data System (ADS)

    Weng, Jingmeng; Wen, Weidong; Cui, Haitao; Chen, Bo

    2018-06-01

    A new method to generate a random distribution of fibers in the transverse cross-section of fiber reinforced composites with high fiber volume fraction is presented in this paper. Based on microscopy observation of the transverse cross-sections of unidirectional composite laminates, a hexagonal arrangement is set as the initial configuration, and each fiber is given an arbitrary initial velocity in an arbitrary direction; the micro-scale representative volume element (RVE) is then established by simulating perfectly elastic collisions. Combined with the proposed periodic boundary conditions, which are suitable for multi-axial loading, the effective elastic properties of composite materials can be predicted. The predicted properties show reasonable agreement with experimental results. By comparing the stress field of the RVE with randomly distributed fibers against that of the RVE with periodically distributed fibers, the predicted elastic modulus of the former is found to be greater than that of the latter.

  12. The noisy voter model on complex networks.

    PubMed

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-04-20

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing one to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity (the variance of the underlying degree distribution) has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured.
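
    As a toy illustration of the model's update rule (not the paper's annealed-approximation calculation), the sketch below runs noisy voter dynamics on a ring lattice standing in for a complex network; the population size, noise rate a, and step count are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      # Noisy voter model: with probability a a node adopts a random state (noise);
      # otherwise it copies a randomly chosen neighbor.
      N, a, steps = 200, 0.01, 20_000
      neighbors = [[(i - 1) % N, (i + 1) % N] for i in range(N)]  # toy ring lattice
      state = rng.integers(0, 2, N)

      for _ in range(steps):
          i = rng.integers(N)
          if rng.random() < a:
              state[i] = rng.integers(0, 2)               # random change of state
          else:
              state[i] = state[rng.choice(neighbors[i])]  # copy a neighbor

      print("magnetization:", 2 * state.mean() - 1)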

  13. Characterizing ISI and sub-threshold membrane potential distributions: Ensemble of IF neurons with random squared-noise intensity.

    PubMed

    Kumar, Sanjeev; Karmeshu

    2018-04-01

    A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing a superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of the generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed form expressions of probability distributions in terms of the generalized K-distribution. The findings of the proposed model are validated on a record of the spiking activity of thousands of neurons. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.
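
    A sketch of the superstatistical mixing described above, under assumed parameters (threshold θ = 1, drift 0.8, gamma-distributed σ²) and using the standard result that the first passage of a drifted diffusion to a threshold is inverse-Gaussian:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      # Superstatistics: sigma^2 varies across the ensemble following a gamma law;
      # conditional on sigma^2, the ISI of a drifted IF neuron is inverse-Gaussian.
      theta, drift = 1.0, 0.8                     # threshold and mean drift (assumed)
      sigma2 = stats.gamma.rvs(a=3.0, scale=0.05, size=100_000, random_state=rng)

      mean_isi = theta / drift                    # conditional ISI mean
      shape = theta ** 2 / sigma2                 # conditional IG shape parameter
      isi = stats.invgauss.rvs(mu=mean_isi / shape, scale=shape, random_state=rng)

      # the mixture has a heavier tail than any single fixed-sigma^2 IG distribution
      print("ensemble ISI mean:", isi.mean())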

  14. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.

  15. Experimental investigation of leak detection using mobile distributed monitoring system

    NASA Astrophysics Data System (ADS)

    Chen, Jiang; Zheng, Junli; Xiong, Feng; Ge, Qi; Yan, Qixiang; Cheng, Fei

    2018-01-01

    The leak detection of rockfill dams is currently hindered by spatial and temporal randomness and a wide monitoring range. The spatial resolution of fiber Bragg grating (FBG) temperature sensing technology is related to the distance between measuring points. As a result, the number of measuring points should be increased to ensure that the precise location of the leak is detected. However, this leads to a higher monitoring cost. Consequently, it is difficult to promote and apply this technology to effectively monitor rockfill dam leakage. In this paper, a practical mobile distributed monitoring system with dual tubes is used by combining the FBG sensing system and a hydrothermal cycling system. This dual-tube structure is composed of an outer polyethylene of raised temperature resistance heating pipe, an inner polytetrafluoroethylene tube, and an FBG sensor string; the FBG sensor string can be dragged freely in the internal tube to change the position of the measuring points and improve the spatial resolution. In order to test the effectiveness of the system, a large-scale model test of concentrated leakage in 13 working conditions is carried out by identifying the location, quantity, and leakage rate of the leakage passages. Based on Newton's law of cooling, the leakage state is identified using the seepage identification index ζ_v, determined from the cooling curve. Results suggest that the monitoring system shows high sensitivity and can improve the spatial resolution with limited measuring points, and thus better locate the leakage area. In addition, the seepage identification index ζ_v correlated well with the leakage rate qualitatively.

  16. Randomized Hough transform filter for echo extraction in DLR

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Chen, Hao; Shen, Ming; Gao, Pengqi; Zhao, You

    2016-11-01

    The signal-to-noise ratio (SNR) of debris laser ranging (DLR) data is extremely low, and the valid returns in the DLR range residuals are distributed on a curve over a long observation time. It is therefore hard to extract the signals from noise in the Observed-minus-Calculated (O-C) residuals with low SNR. In order to autonomously extract the valid returns, we propose a new algorithm based on the randomized Hough transform (RHT). We first pre-process the data using a histogram method to find the zonal area that contains all the possible signals, removing a large amount of noise. Then the data are processed with the RHT algorithm to find the curve on which the signal points are distributed. A new parameter update strategy is introduced in the RHT to get the best parameters. We also analyze the values of the parameters in the algorithm. We test our algorithm on 10 Hz repetition rate DLR data from Yunnan Observatory and 100 Hz repetition rate DLR data from the Graz SLR station. For 10 Hz DLR data with relatively large and similar range gates, we can process the data in real time and extract all the signals autonomously with a few false readings. For 100 Hz DLR data with longer observation times, we autonomously post-process DLR data of 0.9%, 2.7%, 8% and 33% return rate with high reliability. The extracted points contain almost all signals and a low percentage of noise. Additional noise is added to the 10 Hz DLR data to obtain lower return rate data. The valid returns can also be well extracted for DLR data with 0.18% and 0.1% return rates.

  17. Pore-scale hydrodynamics in a progressively bio-clogged three-dimensional porous medium: 3D particle tracking experiments and stochastic transport modelling

    NASA Astrophysics Data System (ADS)

    Morales, V. L.; Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2017-12-01

    Biofilms are ubiquitous bacterial communities growing in various porous media including soils, trickling and sand filters and are relevant for applications such as the degradation of pollutants for bioremediation, waste water or drinking water production purposes. By their development, biofilms dynamically change the structure of porous media, increasing the heterogeneity of the pore network and the non-Fickian or anomalous dispersion. In this work, we use an experimental approach to investigate the influence of biofilm growth on pore scale hydrodynamics and transport processes and propose a correlated continuous time random walk model capturing these observations. We perform three-dimensional particle tracking velocimetry at four different time points from 0 to 48 hours of biofilm growth. The biofilm growth notably impacts pore-scale hydrodynamics, as shown by strong increase of the average velocity and in tailing of Lagrangian velocity probability density functions. Additionally, the spatial correlation length of the flow increases substantially. This points at the formation of preferential flow pathways and stagnation zones, which ultimately leads to an increase of anomalous transport in the porous media considered, characterized by non-Fickian scaling of mean-squared displacements and non-Gaussian distributions of the displacement probability density functions. A gamma distribution provides a remarkable approximation of the bulk and the high tail of the Lagrangian pore-scale velocity magnitude, indicating a transition from a parallel pore arrangement towards a more serial one. Finally, a correlated continuous time random walk based on a stochastic relation velocity model accurately reproduces the observations and could be used to predict transport beyond the time scales accessible to the experiment.

  18. Work distributions for random sudden quantum quenches

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter

    2017-05-01

    The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite dimensional Hilbert spaces we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with independent, identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
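
    A numerical sketch of the construction (assumed inverse temperature β = 1 and a two-level system, with the initial Hamiltonian sharply given and the final one drawn from the GUE):

      import numpy as np

      rng = np.random.default_rng(5)

      def gue(n):
          """Draw an n x n Hermitian matrix from the Gaussian unitary ensemble."""
          a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
          return (a + a.conj().T) / 2

      # work statistics for a sudden quench H_i -> H_f:
      # p(W) = sum_{n,m} p_n |<m_f|n_i>|^2 delta(W - (E_m^f - E_n^i))
      n, beta = 2, 1.0
      Ei, Vi = np.linalg.eigh(gue(n))           # sharply given initial Hamiltonian
      p = np.exp(-beta * Ei)
      p /= p.sum()                              # thermal initial populations
      works, weights = [], []
      for _ in range(5000):                     # random final Hamiltonians
          Ef, Vf = np.linalg.eigh(gue(n))
          overlap = np.abs(Vf.conj().T @ Vi) ** 2   # |<m_f|n_i>|^2
          for ni in range(n):
              for m in range(n):
                  works.append(Ef[m] - Ei[ni])
                  weights.append(p[ni] * overlap[m, ni])

      works, weights = np.array(works), np.array(weights)
      print("mean work:", np.average(works, weights=weights))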

  19. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.

  20. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    NASA Astrophysics Data System (ADS)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = Σ_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall normal distance, and the a_i are independently, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural log. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing known logarithmic scaling of moments, structure functions, and the two-point correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in two-point statistics such as ⟨u_z²(x) u_z²(x+r)⟩^{1/2}, ⟨u_z³(x) u_z³(x+r)⟩^{1/3}, etc. can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the model.
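
    A quick numerical check of the HRAP scaling is a sketch under the simplifying assumption of independent standard Gaussian additives (which the abstract does not specify): the variance of u_z should grow like ln(δ/z), since u_z is a sum of N_z ~ ln(δ/z) i.i.d. terms.

      import numpy as np

      rng = np.random.default_rng(6)

      # HRAP sketch: u_z is a sum of N_z i.i.d. additives, N_z ~ ln(delta/z),
      # so var(u_z) should track ln(delta/z) (Townsend's logarithmic law).
      delta = 1.0
      for z in (0.3, 0.1, 0.03, 0.01):
          n_z = max(1, int(round(np.log(delta / z))))      # attached-eddy count
          u = rng.normal(size=(100_000, n_z)).sum(axis=1)  # many realizations of u_z
          print(f"z = {z:5.2f}:  var(u) = {u.var():.2f},  ln(delta/z) = {np.log(delta / z):.2f}")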

  1. Integrating remote sensing with species distribution models; Mapping tamarisk invasions using the Software for Assisted Habitat Modeling (SAHM)

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-01-01

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common, however conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled capped transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold independent and threshold dependent evaluation metrics with independent location data. To account for model specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  3. 16 CFR 305.19 - Promotional material displayed or distributed at point of sale.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 305.19 Promotional material displayed or distributed at point of sale. (a)(1) Any manufacturer, distributor, retailer or private labeler who prepares printed material for display or distribution at point of sale concerning a covered product (except...

  4. Extensions to the visual predictive check to facilitate model performance evaluation.

    PubMed

    Post, Teun M; Freijer, Jan I; Ploeger, Bart A; Danhof, Meindert

    2008-04-01

    The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However, in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. In recent adaptations to the VPC this drawback is taken into consideration by presenting the observed and predicted data as percentiles. In addition, in some of these adaptations the uncertainty in the predictions is represented visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover, the influence of, and the information residing in, missing data at each time point is not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations as a percentage, thus regardless of the density of the data, above and below the predicted median at each time point, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
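
    The QVPC summary lends itself to a compact implementation. Below is a minimal sketch, not the authors' code: for each time point it reports the percentage of observations above and below a predicted median, counting NaNs as unavailable data; the toy data matrix and median vector are assumptions.

      import numpy as np

      def qvpc_fractions(observed, predicted_median):
          """QVPC-style summary: per-time-point percentages of observations
          above/below the predicted median, with NaNs counted as unavailable."""
          obs = np.asarray(observed, dtype=float)   # shape: (subjects, time points)
          n = obs.shape[0]
          above = (obs > predicted_median).sum(axis=0) / n * 100
          below = (obs < predicted_median).sum(axis=0) / n * 100
          unavailable = np.isnan(obs).sum(axis=0) / n * 100
          return above, below, unavailable

      # toy data: 4 subjects x 3 time points, one dropout at the last time point
      obs = [[1.2, 1.9, np.nan], [0.8, 1.1, 1.4], [1.0, 1.6, 2.2], [0.7, 0.9, 1.1]]
      print(qvpc_fractions(obs, predicted_median=np.array([1.0, 1.5, 1.8])))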

  5. Resonant paramagnetic enhancement of the thermal and zero-point Nyquist noise

    NASA Astrophysics Data System (ADS)

    França, H. M.; Santos, R. B. B.

    1999-01-01

    The interaction between a very thin macroscopic solenoid and a single magnetic particle precessing in an external magnetic field B0 is described by taking into account the thermal and the zero-point fluctuations of stochastic electrodynamics. The inductor belongs to an RLC circuit without batteries, and the random motion of the magnetic dipole generates in the solenoid a fluctuating current I_dip(t) and a fluctuating voltage ε_dip(t), with spectral distribution quite different from the Nyquist noise. We show that the mean square value ⟨I_dip²⟩ presents an enormous variation when the frequency of precession approaches the frequency of the circuit, but it is still much smaller than the Nyquist current in the circuit. However, we also show that ⟨I_dip²⟩ can reach measurable values if the inductor is interacting with a macroscopic sample of magnetic particles (atoms or nuclei) which are close enough to its coils.

  6. Dual frequency scatterometer measurement of ocean wave height

    NASA Technical Reports Server (NTRS)

    Johnson, J. W.; Jones, W. L.; Swift, C. T.; Grantham, W. L.; Weissman, D. E.

    1975-01-01

    A technique for remotely measuring wave height averaged over an area of the sea surface was developed and verified with a series of aircraft flight experiments. The measurement concept involves the cross correlation of the amplitude fluctuations of two monochromatic reflected signals with variable frequency separation. The signal reflected by the randomly distributed specular points on the surface is observed in the backscatter direction at nadir incidence angle. The measured correlation coefficient is equal to the square of the magnitude of the characteristic function of the specular point height from which RMS wave height can be determined. The flight scatterometer operates at 13.9 GHz and (13.9 − Δf) GHz, with a maximum Δf of 40 MHz. Measurements were conducted for low and moderate sea states at altitudes of 2, 5, and 10 thousand feet. The experimental results agree with the predicted decorrelation with frequency separation and with off-nadir incidence angle.

  7. Research on sparse feature matching of improved RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangsi; Zhao, Xian

    2018-04-01

    In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three aspects: instead of the homography matrix, this paper uses the fundamental matrix generated by the 8-point algorithm as the model; the sample is selected by a random block selecting method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on the basis of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
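
    A minimal sketch of such a pipeline using OpenCV's stock RANSAC with a fundamental-matrix model (the paper's block-sampling and SPRT modifications are not implemented here); the image filenames are placeholders.

      import cv2
      import numpy as np

      # hypothetical image pair; SIFT features, rough descriptor matching, then
      # RANSAC on the fundamental matrix to reject outlier correspondences
      img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
      pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
      pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

      # standard OpenCV RANSAC with the fundamental matrix as the model
      F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
      print("inliers:", int(inlier_mask.sum()), "of", len(matches))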

  8. Gaussian statistics of the cosmic microwave background: Correlation of temperature extrema in the COBE DMR two-year sky maps

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.

    1995-01-01

    We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a_lm are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.

  9. Detector Position Estimation for PET Scanners.

    PubMed

    Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul

    2012-06-11

    Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof-of-concept methodology for determining the six degrees of freedom of detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.

  10. A non-equilibrium neutral model for analysing cultural change.

    PubMed

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
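
    A minimal sketch of the random-copying null model itself (the paper's non-equilibrium version additionally lets population size and mutation rate vary in time; here both are fixed assumptions):

      import numpy as np

      rng = np.random.default_rng(7)

      # Neutral model: each step a variant is copied in proportion to its current
      # frequency; with probability mu a brand-new variant is introduced instead.
      n, mu, steps = 500, 0.01, 10_000
      population = np.zeros(n, dtype=int)      # everyone starts with variant 0
      next_label = 1

      for _ in range(steps):
          i = rng.integers(n)                  # individual to be replaced
          if rng.random() < mu:
              population[i] = next_label       # innovation: new cultural variant
              next_label += 1
          else:
              population[i] = population[rng.integers(n)]  # copy a random individual

      variants, counts = np.unique(population, return_counts=True)
      print(len(variants), "variants; top frequencies:", sorted(counts, reverse=True)[:5])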

  11. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

    PubMed

    Waag, Robert C; Astheimer, Jeffrey P

    2005-05-01

    Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.

  12. Effects of military training activities on shrub-steppe raptors in southwestern Idaho, USA

    USGS Publications Warehouse

    Lehman, Robert N.; Steenhof, Karen; Kochert, Michael N.; Carpenter, L.B.

    1999-01-01

    Between 1991 and 1994, we assessed relative abundance, nesting success, and distribution of ferruginous hawks (Buteo regalis), northern harriers (Circus cyaneus), burrowing owls (Athene cunicularia), and short-eared owls (Asio flammeus) inside and outside a military training site in the Snake River Birds of Prey National Conservation Area, southwestern Idaho. The Orchard Training Area is used primarily for armored vehicle training and artillery firing by the Idaho Army National Guard. Relative abundance of nesting pairs inside and outside the training site was not significantly different from 1991 to 1993 but was significantly higher on the training site in 1994 (P ≤ 0.03). Nesting success varied among years but was not significantly different inside and outside the training site (P > 0.26). In 1994, short-eared owl and burrowing owl nests were significantly closer to firing ranges used early in the spring before owls laid eggs than were random points (P…

  13. Progress in characterizing submonolayer island growth: Capture-zone distributions, growth exponents, & hot precursors

    NASA Astrophysics Data System (ADS)

    Einstein, Theodore L.; Pimpinelli, Alberto; González, Diego Luis; Morales-Cifuentes, Josue R.

    2015-09-01

    In studies of epitaxial growth, analysis of the distribution of the areas of capture zones (i.e. proximity polygons or Voronoi tessellations with respect to island centers) is often the best way to extract the critical nucleus size i. For non-random nucleation the normalized areas s of these Voronoi cells are well described by the generalized Wigner distribution (GWD) P_β(s) = a s^β exp(−b s²), particularly in the central region 0.5 < s < 2 where data are least noisy. Extensive Monte Carlo simulations reveal inadequacies of our earlier mean field analysis, suggesting β = i + 2 for diffusion-limited aggregation (DLA). Since simulations generate orders of magnitude more data than experiments, they permit close examination of the tails of the distribution, which differ from the simple GWD form. One refinement is based on a fragmentation model. We also compare island-size distributions, and we compare analysis by island-size distribution with analysis by scaling of island density with flux. Modifications appear for attach-limited aggregation (ALA). We focus on the experimental system para-hexaphenyl on amorphous mica, comparing the results of the three analysis techniques and reconciling them via a novel model of hot precursors based on rate equations, pointing out the existence of intermediate scaling regimes between DLA and ALA.
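
    The GWD is fully determined by β once a and b are fixed by normalization and unit mean of s; a small sketch using the standard constants b = [Γ((β+2)/2)/Γ((β+1)/2)]² and a = 2b^{(β+1)/2}/Γ((β+1)/2):

      import numpy as np
      from scipy.special import gamma as G

      def gwd(s, beta):
          """Generalized Wigner distribution P_beta(s) = a s^beta exp(-b s^2),
          with a, b fixed by normalization and unit mean of the scaled area s."""
          b = (G((beta + 2) / 2) / G((beta + 1) / 2)) ** 2
          a = 2 * b ** ((beta + 1) / 2) / G((beta + 1) / 2)
          return a * s ** beta * np.exp(-b * s ** 2)

      s = np.linspace(0.01, 3, 300)
      for i in (1, 2, 3):                  # candidate critical nucleus sizes
          p = gwd(s, beta=i + 2)           # beta = i + 2 for DLA, per the abstract
          print(f"i = {i}: peak near s = {s[np.argmax(p)]:.2f}")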

  14. Simple point vortex model for the relaxation of 2D superfluid turbulence in a Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Kim, Joon Hyun; Kwon, Woo Jin; Shin, Yong-Il

    2016-05-01

    In a recent experiment, it was found that the dissipative evolution of a corotating vortex pair in a trapped Bose-Einstein condensate is well described by a point vortex model with longitudinal friction on the vortex motion, and the thermal friction coefficient was determined as a function of sample temperature. In this poster, we present a numerical study on the relaxation of 2D superfluid turbulence based on the dissipative point vortex model. We consider a homogeneous system in a cylindrical trap having randomly distributed vortices and implement the vortex-antivortex pair annihilation by removing a pair when its separation becomes smaller than a certain threshold value. We characterize the relaxation of the turbulent vortex states with the decay time required for the vortex number to be reduced to a quarter of the initial number. We find the vortex decay time is inversely proportional to the thermal friction coefficient. In particular, we observe that the decay times obtained from this work show good quantitative agreement with the experimental results, indicating that in spite of its simplicity, the point vortex model reasonably captures the physics of the relaxation dynamics of the real system.

  15. Simulation of the mechanical behavior of random fiber networks with different microstructure.

    PubMed

    Hatami-Marbini, H

    2018-05-24

    Filamentous protein networks are broadly encountered in biological systems such as cytoskeleton and extracellular matrix. Many numerical studies have been conducted to better understand the fundamental mechanisms behind the striking mechanical properties of these networks. In most of these previous numerical models, the Mikado algorithm has been used to represent the network microstructure. Here, a different algorithm is used to create random fiber networks in order to investigate possible roles of architecture on the elastic behavior of filamentous networks. In particular, random fibrous structures are generated from the growth of individual fibers from random nucleation points. We use computer simulations to determine the mechanical behavior of these networks in terms of their model parameters. The findings are presented and discussed along with the response of Mikado fiber networks. We demonstrate that these alternative networks and Mikado networks show a qualitatively similar response. Nevertheless, the overall elasticity of Mikado networks is stiffer compared to that of the networks created using the alternative algorithm. We describe the effective elasticity of both network types as a function of their line density and of the material properties of the filaments. We also characterize the ratio of bending and axial energy and discuss the behavior of these networks in terms of their fiber density distribution and coordination number.
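
    As a rough illustration of the alternative construction, the sketch below seeds fibers at random nucleation points with random orientations and grows each one bidirectionally until it reaches the domain boundary (the paper's growth rule may differ, e.g., terminating on other fibers); crosslinks are counted as segment intersections.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def grow_fiber(c, theta):
    """Grow a straight fiber bidirectionally from nucleation point c in
    direction theta until both ends reach the unit-box boundary."""
    u = np.array([np.cos(theta), np.sin(theta)])
    ts = []
    for dim in (0, 1):
        if abs(u[dim]) > 1e-12:
            ts += [(0.0 - c[dim]) / u[dim], (1.0 - c[dim]) / u[dim]]
    ts = np.array(ts)
    return c + ts[ts < 0].max() * u, c + ts[ts > 0].min() * u

def ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def crossing(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 intersect (generic position)."""
    return ccw(p1, p3, p4) != ccw(p2, p3, p4) and ccw(p1, p2, p3) != ccw(p1, p2, p4)

n_fibers = 40
centers = rng.uniform(0, 1, size=(n_fibers, 2))
thetas = rng.uniform(0, np.pi, size=n_fibers)
fibers = [grow_fiber(c, t) for c, t in zip(centers, thetas)]

crosslinks = sum(crossing(*fibers[i], *fibers[j])
                 for i, j in combinations(range(n_fibers), 2))
line_density = sum(np.linalg.norm(b - a) for a, b in fibers)  # length per unit area
print(f"crosslinks: {crosslinks}, line density: {line_density:.1f}")
print(f"mean crosslinks per fiber: {2 * crosslinks / n_fibers:.2f}")
```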

  16. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
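
    A small numerical illustration of the dichotomy (my construction, not the paper's model): multiplicative growth means log x is a sum of i.i.d. increments, so finite-variance increments give a log-normal x, while heavy-tailed increments in the domain of attraction of a Lévy stable law give a log-Lévy x.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_steps, n_paths = 500, 2000

# Finite-variance log-increments -> CLT -> log x is normal, x is log-normal.
log_x = rng.normal(0.0, 0.05, size=(n_steps, n_paths)).sum(axis=0)
print("normal fit to log x (mu, sigma):", stats.norm.fit(log_x))

# Heavy-tailed log-increments (alpha = 1.5 stable) -> log x is Levy,
# x is log-Levy ("extreme randomness"): heavy tails survive the sum.
log_y = stats.levy_stable.rvs(1.5, 0.0, scale=0.05, size=(n_steps, n_paths),
                              random_state=rng).sum(axis=0)
print("excess kurtosis of log y:", stats.kurtosis(log_y))
```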

  17. Scanning the skeleton of the 4D F-theory landscape

    NASA Astrophysics Data System (ADS)

    Taylor, Washington; Wang, Yi-Nan

    2018-01-01

    Using a one-way Monte Carlo algorithm from several different starting points, we get an approximation to the distribution of toric threefold bases that can be used in four-dimensional F-theory compactification. We separate the threefold bases into "resolvable" ones where the Weierstrass polynomials (f, g) can vanish to order (4, 6) or higher on codimension-two loci and the "good" bases where these (4, 6) loci are not allowed. A simple estimate suggests that the number of distinct resolvable base geometries exceeds 10^3000, with over 10^250 "good" bases, though the actual numbers are likely much larger. We find that the good bases are concentrated at specific "end points" with special isolated values of h^{1,1} that are bigger than 1,000. These end point bases give Calabi-Yau fourfolds with specific Hodge numbers mirror to elliptic fibrations over simple threefolds. The non-Higgsable gauge groups on the end point bases are almost entirely made of products of E_8, F_4, G_2 and SU(2). Nonetheless, we find a large class of good bases with a single non-Higgsable SU(3). Moreover, by randomly contracting the end point bases, we find many resolvable bases with h^{1,1}(B) ~ 50-200 that cannot be contracted to another smooth threefold base.

  18. An efficient algorithm for the generalized Foldy-Lax formulation

    NASA Astrophysics Data System (ADS)

    Huang, Kai; Li, Peijun; Zhao, Hongkai

    2013-02-01

    Consider the scattering of a time-harmonic plane wave incident on a two-scale heterogeneous medium, which consists of scatterers that are much smaller than the wavelength and extended scatterers that are comparable to the wavelength. In this work we treat those small scatterers as isotropic point scatterers and use a generalized Foldy-Lax formulation to model wave propagation and capture multiple scattering among point scatterers and extended scatterers. Our formulation is given as a coupled system, which combines the original Foldy-Lax formulation for the point scatterers and the regular boundary integral equation for the extended obstacle scatterers. The existence and uniqueness of the solution for the formulation is established in terms of physical parameters such as the scattering coefficient and the separation distances. Computationally, an efficient, physically motivated Gauss-Seidel iterative method is proposed to solve the coupled system, where only a linear system of algebraic equations for the point scatterers or a boundary integral equation for a single extended obstacle scatterer needs to be solved at each step of the iteration. The convergence of the iterative method is also characterized in terms of physical parameters. Numerical tests for the far-field patterns of scattered fields arising from uniformly or randomly distributed point scatterers and single or multiple extended obstacle scatterers are presented.
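
    A minimal sketch of the point-scatterer part of such a computation (the extended obstacles and their boundary integral equation are omitted): the Foldy-Lax self-consistent system ψ_i = u_inc(x_i) + σ Σ_{j≠i} G(x_i, x_j) ψ_j is solved by Gauss-Seidel sweeps, with the 3D Helmholtz Green's function and toy parameters as assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
k = 2 * np.pi              # wavenumber
sigma = 0.05               # scattering coefficient (assumed small)
pts = rng.uniform(0, 2.0, size=(30, 3))   # randomly distributed point scatterers

def G(r):
    """Free-space 3D Helmholtz Green's function."""
    return np.exp(1j * k * r) / (4 * np.pi * r)

u_inc = np.exp(1j * k * pts[:, 2])        # incident plane wave along z

# off-diagonal Green's matrix (diagonal excluded: no self-interaction)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, 1.0)                  # dummy value, zeroed next line
Gmat = G(d)
np.fill_diagonal(Gmat, 0.0)

# Gauss-Seidel sweeps on  psi_i = u_inc_i + sigma * sum_j G_ij psi_j
psi = u_inc.copy()
for sweep in range(100):
    prev = psi.copy()
    for i in range(len(psi)):
        psi[i] = u_inc[i] + sigma * Gmat[i] @ psi   # uses freshly updated entries
    if np.abs(psi - prev).max() < 1e-10:
        break
print(f"converged in {sweep + 1} sweeps")

# far-field amplitude in direction xhat (standard point-source far-field form)
xhat = np.array([1.0, 0.0, 0.0])
far = sigma * np.sum(psi * np.exp(-1j * k * (pts @ xhat))) / (4 * np.pi)
print("far-field amplitude along x:", far)
```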

  19. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.E.; Baker, M.C.

    1999-07-25

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.

  20. Current-wave spectra coupling project. Volume III. Cumulative distribution of forces on structures subjected to the combined action of currents and random waves for potential OTEC sites: (A) Keahole Point, Hawaii, 100 year hurricane; (B) Punta Tuna, Puerto Rico, 100 year hurricane; (C) New Orleans, Louisiana, 100 year hurricane; (D) West Coast of Florida, 100 year hurricane. [CUFOR code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venezian, G.; Bretschneider, C.L.

    1980-08-01

    This volume details a new methodology to statistically analyze the forces experienced by a structure at sea. Conventionally a wave climate is defined using a spectral function. The wave climate is described using a joint distribution of wave heights and periods (wave lengths), characterizing actual sea conditions through some measured or estimated parameters like the significant wave height, maximum spectral density, etc. Random wave heights and periods satisfying the joint distribution are then generated. Wave kinematics are obtained using linear or non-linear theory. In the case of currents a linear wave-current interaction theory of Venezian (1979) is used. The peak force experienced by the structure for each individual wave is identified. Finally, the probability of exceedance of any given peak force on the structure may be obtained. A three-parameter Longuet-Higgins type joint distribution of wave heights and periods is discussed in detail. This joint distribution was used to model sea conditions at four potential OTEC locations. A uniform cylindrical pipe of 3 m diameter, extending to a depth of 550 m, was used as a sample structure. Wave-current interactions were included and forces computed using Morison's equation. The drag and virtual mass coefficients were interpolated from published data. A Fortran program CUFOR was written to execute the above procedure. Tabulated and graphic results of peak forces experienced by the structure, for each location, are presented. A listing of CUFOR is included. Considerable flexibility of structural definition has been incorporated. The program can easily be modified in the case of an alternative joint distribution or for inclusion of effects like non-linearity of waves, transverse forces and diffraction.

  1. Randomness determines practical security of BB84 quantum key distribution.

    PubMed

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-11-10

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, various attacking schemes exploiting imperfect devices have been proposed, but a general security analysis model covering all practical attacking schemes has not. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of practical quantum key distribution systems.

  2. Randomness determines practical security of BB84 quantum key distribution

    PubMed Central

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-01-01

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, various attacking schemes exploiting imperfect devices have been proposed, but a general security analysis model covering all practical attacking schemes has not. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of practical quantum key distribution systems. PMID:26552359

  3. Randomness determines practical security of BB84 quantum key distribution

    NASA Astrophysics Data System (ADS)

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-11-01

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, various attacking schemes exploiting imperfect devices have been proposed, but a general security analysis model covering all practical attacking schemes has not. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of practical quantum key distribution systems.

  4. The effects of noise due to random undetected tilts and paleosecular variation on regional paleomagnetic directions

    USGS Publications Warehouse

    Calderone, G.J.; Butler, R.F.

    1991-01-01

    Random tilting of a single paleomagnetic vector produces a distribution of vectors which is not rotationally symmetric about the original vector and therefore not Fisherian. Monte Carlo simulations were performed on two types of vector distributions: 1) distributions of vectors formed by perturbing a single original vector with a Fisher distribution of bedding poles (each defining a tilt correction) and 2) standard Fisher distributions. These simulations demonstrate that inclinations of vectors drawn from both distributions are biased toward shallow inclinations. The Fisher mean direction of the distribution of vectors formed by perturbing a single vector with random undetected tilts is biased toward shallow inclinations, but this bias is insignificant for angular dispersions of bedding poles less than 20°. -from Authors
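
    A small Monte Carlo in the spirit of the abstract (the construction and parameter choices are mine): bedding poles are drawn from a Fisher distribution about the vertical, the implied tilt is applied to a 45°-inclination vector, and the inclination of the mean of the perturbed vectors is compared with the original; the rule of thumb κ ≈ (81/S)² converts angular dispersion S (degrees) to Fisher precision.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(5)

def fisher_colatitudes(kappa, n):
    """Sample colatitudes from a Fisher distribution (inverse-CDF formula)."""
    u = rng.uniform(size=n)
    return np.arccos(1.0 + np.log(1 - u * (1 - np.exp(-2 * kappa))) / kappa)

def tilted_vectors(v, kappa, n):
    """Perturb vector v with random undetected tilts: each tilt is the rotation
    carrying the vertical onto a Fisher-distributed bedding pole."""
    theta = fisher_colatitudes(kappa, n)          # tilt magnitudes
    phi = rng.uniform(0, 2 * np.pi, n)            # random tilt azimuths
    # rotation axis is horizontal, perpendicular to the tilt azimuth
    axes = np.column_stack([-np.sin(phi), np.cos(phi), np.zeros(n)])
    return Rotation.from_rotvec(axes * theta[:, None]).apply(v)

inc0 = np.radians(45.0)
v0 = np.array([np.cos(inc0), 0.0, np.sin(inc0)])  # unit vector, 45 deg inclination

for disp in (10.0, 20.0, 30.0):                   # angular dispersion of poles, deg
    kappa = (81.0 / disp) ** 2
    vs = tilted_vectors(v0, kappa, 20_000)
    mean = vs.mean(axis=0)
    inc_mean = np.degrees(np.arcsin(mean[2] / np.linalg.norm(mean)))
    print(f"dispersion {disp:4.1f} deg -> mean inclination {inc_mean:.2f} deg")
```

    The mean inclination comes out below 45°, with the shallowing growing with the dispersion of bedding poles, consistent with the bias described above.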

  5. Quantitative comparison of the application accuracy between NDI and IGT tracking systems

    NASA Astrophysics Data System (ADS)

    Li, Qinghang; Zamorano, Lucia J.; Jiang, Charlie Z. W.; Gong, JianXing; Diaz, Fernando

    1999-07-01

    The application accuracy is a crucial factor for stereotactic surgical localization systems, in which the space digitization system is one of the most important pieces of equipment. In this study we compared the application accuracy of the OPTOTRAK space digitization system (OPTOTRAK 3020, Northern Digital, Waterloo, CAN) and the FlashPoint Model 3000 and 5000 3-D digitizer systems (Image Guided Surgery Technology Inc., Boulder, CO 80301, USA) for interactive localization of intracranial lesions. A phantom was mounted with the implantable frameless marker system (Fischer-Leibinger, Freiburg, Germany), with markers randomly distributed on the surface of the phantom. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points were used as the deviation from the 'true point'. The mean square root was calculated to show the sum of vectors. A paired t-test was used to analyze results. The phantom results showed that the mean square roots were 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3-D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 3-D digitizer system in the 1 mm sections of CT scan. These preliminary results showed no significant difference between the two tracking systems. Both can be used for image-guided surgery procedures.

  6. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  7. Mapping risk for nest predation on a barrier island

    USGS Publications Warehouse

    Hackney, Amanda D.; Baldwin, Robert F.; Jodice, Patrick G.R.

    2013-01-01

    Barrier islands and coastal beach systems provide nesting habitat for marine and estuarine turtles. Densely settled coastal areas may subsidize nest predators. Our purpose was to inform conservation by providing a greater understanding of habitat-based risk factors for nest predation, for an estuarine turtle. We expected that habitat conditions at predated nests would differ from random locations at two spatial extents. We developed and validated an island-wide model for the distribution of predated Diamondback terrapin nests using locations of 198 predated nests collected during exhaustive searches at Fisherman Island National Wildlife Refuge, USA. We used aerial photographs to identify all areas of possible nesting habitat and searched each and surrounding environments for nests, collecting location and random-point microhabitat data. We built models for the probability of finding a predated nest using an equal number of random points and validated them with a reserve set (N = 67). Five variables in 9 a priori models were used and the best selected model (AIC weight 0.98) reflected positive associations with sand patches near marshes and roadways. Model validation had an average capture rate of predated nests of 84.14 % (26.17–97.38 %, Q1 77.53 %, median 88.07 %, Q3 95.08 %). Microhabitat selection results suggest that nests placed at the edges of sand patches adjacent to upland shrub/forest and marsh systems are vulnerable to predation. Forests and marshes provide cover and alternative resources for predators and roadways provide access; a suggestion is to focus nest protection efforts on the edges of dunes, near dense vegetation and roads.

  8. Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning

    NASA Astrophysics Data System (ADS)

    Sagun, Levent

    This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold, justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, a comparative analysis of gradient descent and its stochastic version shows that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses, and a Gaussian-like distribution that appears in the conjugate gradient method, deep learning with MNIST, and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts: the bulk, which is concentrated around zero, and the edges, which are scattered away from zero. Empirical evidence is presented that the bulk indicates how over-parametrized the system is, while the edges depend on the input data. Furthermore, an algorithm is proposed to explore such large-dimensional, degenerate landscapes and locate solutions with decent generalization properties. Finally, it is demonstrated how the new method can explain the empirical success of some of the recent methods proposed for distributed deep learning. In the second part, two applied machine learning problems are studied that are complementary to the machine learning problems of part I. First, US asylum application cases from the past twenty years are studied using random forests. Using only features available up to when a case opens, the algorithm can predict the outcome of the case with 80% accuracy. Next, a particular question-and-answer system is studied. The questions are collected from the Jeopardy! show and fed to Google, and the results are parsed by a recurrent neural network to produce the answer to the original question. Close to 50% accuracy is achieved, where the human-level benchmark is just above 60%.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nahrgang, Marlene; Bluhm, Marcus; Alba, Paolo

    We investigate net proton fluctuations as important observables measured in heavy-ion collisions within the hadron resonance gas (HRG) model. Special emphasis is given to effects which are a priori not inherent in a thermally and chemically equilibrated HRG approach. In particular, we point out the importance of taking into account the successive regeneration and decay of resonances after the chemical freeze-out, which lead to a randomization of the isospin of nucleons and thus to additional fluctuations in the net proton number. In conclusion, we find good agreement between our model results and the recent STAR measurements of the higher-order moments of the net proton distribution.

  10. Time is Money

    NASA Astrophysics Data System (ADS)

    Ausloos, Marcel; Vandewalle, Nicolas; Ivanova, Kristinka

    Specialized topics on financial data analysis from a numerical and physical point of view are discussed as they pertain to the analysis of coherent and random sequences in financial fluctuations within (i) the extended detrended fluctuation analysis method, (ii) the multi-affine analysis technique, (iii) mobile-average intersection rules and distributions, (iv) sandpile avalanche models for crash prediction, (v) the (m,k)-Zipf method and (vi) the i-variability diagram technique for sorting out short-range correlations. The most baffling result, which needs further thought from mathematicians and physicists, is recalled: the crossing of two mobile averages is an original method for measuring the "signal" roughness exponent, but why this is so is not yet understood.

  11. Criticality of Adaptive Control Dynamics

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Pawelzik, Klaus

    2011-12-01

    We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions in between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.

  12. Critical exponents of the disorder-driven superfluid-insulator transition in one-dimensional Bose-Einstein condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cestari, J. C. C.; Foerster, A.; Gusmao, M. A.

    2011-11-15

    We investigate the nature of the superfluid-insulator quantum phase transition driven by disorder for noninteracting ultracold atoms on one-dimensional lattices. We consider two different cases: Anderson-type disorder, with local energies randomly distributed, and pseudodisorder due to a potential incommensurate with the lattice, which is usually called the Aubry-Andre model. A scaling analysis of numerical data for the superfluid fraction for different lattice sizes allows us to determine quantum critical exponents characterizing the disorder-driven superfluid-insulator transition. We also briefly discuss the effect of interactions close to the noninteracting quantum critical point of the Aubry-Andre model.

  13. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension D_{f,p} characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density, ρ^{(2)}, was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ^{(2)} on α is similar to that of D_{f,p} on α. ρ^{(2)} increases with time as a power law in the process of adjusting the particle distribution, and then tends to a stable equilibrium value.

  14. Bivariate-t distribution for transition matrix elements in Breit-Wigner to Gaussian domains of interacting particle systems.

    PubMed

    Kota, V K B; Chavda, N D; Sahu, R

    2006-04-01

    Interacting many-particle systems with a mean-field one-body part plus a chaos-generating random two-body interaction having strength λ exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions, with transition points marked by λ = λ_c and λ = λ_F, respectively; λ_F > λ_c. For these systems a theory for the matrix elements of one-body transition operators is available, valid in the Gaussian domain with λ > λ_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate-t distribution, the theory extends from the Gaussian regime down into the BW regime, up to λ = λ_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.

  15. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.

  16. Performance of multi-hop parallel free-space optical communication over gamma-gamma fading channel with pointing errors.

    PubMed

    Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei

    2016-11-10

    Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best-path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. Then the performance of this optical mesh network is analyzed in detail. A Monte Carlo simulation is also conducted to confirm the results for the average bit error rate (ABER) and outage probability. The numerical results show that a smaller average transmitted optical power is needed to achieve the same ABER and outage probability when using the multi-hop parallel network in FSO links. Furthermore, using a larger number of hops and cooperative paths improves the quality of the communication.
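
    A hedged Monte Carlo sketch of the ABER computation (pointing errors and the paper's exact relay model are omitted; selecting the path with the largest irradiance is my stand-in for the best-path criterion): the gamma-gamma irradiance is generated as the product of two unit-mean gamma variates and the conditional BPSK error rate Q(√(2·SNR)·I) is averaged.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(6)

def gg_samples(alpha, beta, size):
    """Gamma-gamma irradiance as the product of two unit-mean gamma variates."""
    return rng.gamma(alpha, 1.0 / alpha, size) * rng.gamma(beta, 1.0 / beta, size)

def aber_bpsk(snr_db, alpha=4.0, beta=1.9, n=500_000, n_paths=1):
    """Monte Carlo ABER of BPSK over GG fading with best-of-n_paths selection."""
    snr = 10 ** (snr_db / 10)
    I = gg_samples(alpha, beta, (n_paths, n)).max(axis=0)
    # conditional BER Q(sqrt(2*snr)*I) = 0.5 * erfc(sqrt(snr)*I)
    return np.mean(0.5 * erfc(np.sqrt(snr) * I))

for paths in (1, 2, 3):
    print(f"{paths} path(s): ABER = {aber_bpsk(20.0, n_paths=paths):.2e}")
```

    The ABER should fall as the number of cooperative paths grows, in line with the qualitative conclusion above.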

  17. Water security-National and global issues

    USGS Publications Warehouse

    Tindall, James A.; Campbell, Andrew A.

    2010-01-01

    Potable or clean freshwater availability is crucial to life and economic, environmental, and social systems. The amount of freshwater is finite and makes up approximately 2.5 percent of all water on the Earth. Freshwater supplies are small and randomly distributed, so water resources can become points of conflict. Freshwater availability depends upon precipitation patterns, changing climate, and whether the source of consumed water comes directly from desalination, precipitation, or surface and (or) groundwater. At local to national levels, difficulties in securing potable water sources increase with growing populations and economies. Available water improves living standards and drives urbanization, which increases average water consumption per capita. Commonly, disruptions in sustainable supplies and distribution of potable water and conflicts over water resources become major security issues for Government officials. Disruptions are often influenced by land use, human population, use patterns, technological advances, environmental impacts, management processes and decisions, transnational boundaries, and so forth.

  18. Geochemical surveys in the United States in relation to health.

    USGS Publications Warehouse

    Tourtelot, H.A.

    1979-01-01

    Geochemical surveys in relation to health may be classified as having one, two or three dimensions. One-dimensional surveys examine relations between concentrations of elements such as Pb in soils and other media and burdens of the same elements in humans, at a given time. The spatial distributions of element concentrations are not investigated. The primary objective of two-dimensional surveys is to map the distributions of element concentrations, commonly according to stratified random sampling designs based on either conceptual landscape units or artificial sampling strata, but systematic sampling intervals have also been used. Political units have defined sample areas that coincide with the units used to accumulate epidemiological data. Element concentrations affected by point sources have also been mapped. Background values, location of natural or technological anomalies and the geographic scale of variation for several elements often are determined. Three-dimensional surveys result when two-dimensional surveys are repeated to detect environmental changes. -Author

  19. Leveraging ecological theory to guide natural product discovery.

    PubMed

    Smanski, Michael J; Schlatter, Daniel C; Kinkel, Linda L

    2016-03-01

    Technological improvements have accelerated natural product (NP) discovery and engineering to the point that systematic genome mining for new molecules is on the horizon. NP biosynthetic potential is not equally distributed across organisms, environments, or microbial life histories, but instead is enriched in a number of prolific clades. Also, NPs are not equally abundant in nature; some are quite common and others markedly rare. Armed with this knowledge, random 'fishing expeditions' for new NPs are increasingly harder to justify. Understanding the ecological and evolutionary pressures that drive the non-uniform distribution of NP biosynthesis provides a rational framework for the targeted isolation of strains enriched in new NP potential. Additionally, ecological theory leads to testable hypotheses regarding the roles of NPs in shaping ecosystems. Here we review several recent strain prioritization practices and discuss the ecological and evolutionary underpinnings for each. Finally, we offer perspectives on leveraging microbial ecology and evolutionary biology for future NP discovery.

  20. The number statistics and optimal history of non-equilibrium steady states of mortal diffusing particles

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch

    2015-05-01

    Suppose that a point-like steady source at x = 0 injects particles into a half-infinite line. The particles diffuse and die. At long times a non-equilibrium steady state sets in, and we assume that it involves many particles. If the particles are non-interacting, their total number N in the steady state is Poisson-distributed with mean N̄ predicted from a deterministic reaction-diffusion equation. Here we determine the most likely density history of this driven system conditional on observing a given N. We also consider two prototypical examples of interacting diffusing particles: (i) a family of mortal diffusive lattice gases with constant diffusivity (as illustrated by the simple symmetric exclusion process with mortal particles), and (ii) random walkers that can annihilate in pairs. In both examples we calculate the variances of the (non-Poissonian) stationary distributions of N.

  1. INFRARED OBSERVATIONAL MANIFESTATIONS OF YOUNG DUSTY SUPER STAR CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-González, Sergio; Tenorio-Tagle, Guillermo; Silich, Sergiy, E-mail: sergiomtz@inaoep.mx

    The growing evidence pointing at core-collapse supernovae as large dust producers makes young massive stellar clusters ideal laboratories to study the evolution of dust immersed in a hot plasma. Here we address the stochastic injection of dust by supernovae, and follow its evolution due to thermal sputtering within the hot and dense plasma generated by young stellar clusters. Under these considerations, dust grains are heated by means of random collisions with gas particles, which results in the appearance of infrared spectral signatures. We present time-dependent infrared spectral energy distributions that are to be expected from young stellar clusters. Our results are based on hydrodynamic calculations that account for the stochastic injection of dust by supernovae. These also consider gas and dust radiative cooling, stochastic dust temperature fluctuations, the exit of dust grains out of the cluster volume due to the cluster wind, and a time-dependent grain size distribution.

  2. Are there Benefits to Combining Regional Probabilistic Survey and Historic Targeted Environmental Monitoring Data to Improve Our Understanding of Overall Regional Estuary Environmental Status?

    NASA Astrophysics Data System (ADS)

    Dasher, D. H.; Lomax, T. J.; Bethe, A.; Jewett, S.; Hoberg, M.

    2016-02-01

    A regional probabilistic survey of 20 randomly selected stations, where water and sediments were sampled, was conducted over an area of Simpson Lagoon and Gwydyr Bay in the Beaufort Sea adjacent to Prudhoe Bay, Alaska, in 2014. Sampling parameters included water column temperature, salinity, dissolved oxygen, chlorophyll a, and nutrients, and sediment macroinvertebrates, chemistry (i.e., trace metals and hydrocarbons), and grain size. The 2014 probabilistic survey design allows inferences to be made about environmental status, for instance the spatial or areal distribution of sediment trace metals within the design area sampled. Historically, since the 1970s, a number of monitoring studies have been conducted in this estuary area using a targeted rather than a regional probabilistic design. Targeted, non-random designs were utilized to assess specific points of interest and cannot be used to make inferences about distributions of environmental parameters. Due to differences in the environmental monitoring objectives of probabilistic and targeted designs, there has been limited assessment of whether there are benefits to combining the two approaches. This study evaluates whether a combined approach using the 2014 probabilistic survey sediment trace metal and macroinvertebrate results and historical targeted monitoring data can provide a new perspective on better understanding the environmental status of these estuaries.

  3. Hotspots in Hindsight

    NASA Astrophysics Data System (ADS)

    Julian, B. R.; Foulger, G. R.; Hatfield, O.; Jackson, S.; Simpson, E.; Einbeck, J.; Moore, A.

    2014-12-01

    Torsvik et al. [2006] suggest that the original locations of large igneous provinces ("LIPs") and kimberlites, and the current locations of melting anomalies (hot spots), lie preferentially above the margins of two "Large Lower-Mantle Shear Velocity Provinces" (LLSVPs) at the base of the mantle, and that the correlation has a high significance level (> 99.9999%). They conclude that the LLSVP margins are Plume-Generation Zones, and that deep-mantle plumes cause hotspots and LIPs. This conclusion raises questions about what physical processes could be responsible, because, for example, the LLSVPs are likely dense and not abnormally hot [Trampert et al., 2004]. The supposed LIP-hotspot-LLSVP correlations probably are examples of the "Hindsight Heresy" [Acton, 1959]: basing a statistical test upon the same data sample that led to the initial formulation of a hypothesis. In doing this, many competing hypotheses will have been considered and rejected, but this fact will not be taken into account in statistical assessments. Furthermore, probabilities will be computed for many subsets and combinations of the data, and the best-correlated cases will be cited, but this fact will not be taken into account either. Tests using independent hot-spot catalogs and mantle models suggest that the actual significance levels of the correlations are two or three orders of magnitude smaller than claimed. These tests also show that hot spots correlate well with presumably shallowly rooted features such as spreading plate boundaries. Consideration of the kimberlite dataset in the context of geological setting suggests that their apparent association with the LLSVP margins results from the fact that the Kaapvaal craton, the site of most of the kimberlites considered, lies in southern Africa. These observations raise questions about the distinction between correlation and causation and underline the necessity of taking geological factors into account. Figure caption: Left: cumulative distributions of distances from hotspots to the nearest ridge for 5 hotspot lists; the heavy red curve is the distribution function for a random point on Earth's surface. Hotspots are closer to ridges than expected at random. Right: for each list, the probability of at least as many random points being as close to a ridge; values to the right have higher significance.

  4. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.

  5. The invariant statistical rule of aerosol scattering pulse signal modulated by random noise

    NASA Astrophysics Data System (ADS)

    Yan, Zhen-gang; Bian, Bao-Min; Yang, Juan; Peng, Gang; Li, Zhen-hua

    2010-11-01

    A model of random background noise acting on particle signals is established to study the impact of the background noise of the photoelectric sensor in a laser airborne particle counter on the statistical character of the aerosol scattering pulse signals. The results show that the noise broadens the statistical distribution of the particle measurements. Further numerical research shows that the output signal amplitude still has the same distribution when an airborne-particle signal with a lognormal distribution is modulated by random noise that is itself lognormally distributed; that is, the distribution follows a law of statistical invariance. Based on this model, the background noise of the photoelectric sensor and the counting distributions of the random aerosol scattering pulse signals are obtained and analyzed using a high-speed data acquisition card (PCI-9812). The experimental results and simulation results are found to be in good agreement.
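
    The invariance claim is easy to check numerically, assuming the modulation is multiplicative: the product of two log-normal variates is again log-normal, with the log-variances adding. A minimal demonstration (parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 100_000

signal = rng.lognormal(mean=2.0, sigma=0.4, size=n)   # pulse amplitudes
noise = rng.lognormal(mean=0.0, sigma=0.2, size=n)    # multiplicative noise
out = signal * noise

# log(out) = log(signal) + log(noise) is a sum of normals, hence normal,
# so out is log-normal with sigma = sqrt(0.4^2 + 0.2^2).
shape, loc, scale = stats.lognorm.fit(out, floc=0)
print("fitted sigma:", shape, " expected:", np.hypot(0.4, 0.2))
print("fitted median:", scale, " expected:", np.exp(2.0))
```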

  6. External calibration of polarimetric radars using point and distributed targets

    NASA Technical Reports Server (NTRS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-01-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.

  7. External calibration of polarimetric radars using point and distributed targets

    NASA Astrophysics Data System (ADS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-08-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.

  8. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    PubMed

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
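
    The model-comparison step can be sketched as follows (a simplification: the DPLN itself is not in scipy and would need a custom likelihood, so only a few of the study's alternative distributions are fitted here, and the data are synthetic stand-ins):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Stand-in for phenotypic-variation data; the real study used measurements
# from S. cerevisiae single-gene knockout lines.
data = rng.lognormal(mean=0.0, sigma=0.5, size=5000)

candidates = {
    "lognormal": stats.lognorm,
    "normal": stats.norm,
    "exponential": stats.expon,
    "pareto": stats.pareto,
}
for name, dist in candidates.items():
    params = dist.fit(data)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    aic = 2 * len(params) - 2 * loglik            # lower AIC = better model
    print(f"{name:12s} AIC = {aic:10.1f}")
```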

  9. Statistical characterization of spatial patterns of rainfall cells in extratropical cyclones

    NASA Astrophysics Data System (ADS)

    Bacchi, Baldassare; Ranzi, Roberto; Borga, Marco

    1996-11-01

    The assumption of a particular type of distribution of rainfall cells in space is needed for the formulation of several space-time rainfall models. In this study, weather radar-derived rain rate maps are employed to evaluate different types of spatial organization of rainfall cells in storms through the use of distance functions and second-moment measures. In particular the spatial point patterns of the local maxima of rainfall intensity are compared to a completely spatially random (CSR) point process by applying an objective distance measure. For all the analyzed radar maps the CSR assumption is rejected, indicating that at the resolution of the observation considered, rainfall cells are clustered. Therefore a theoretical framework for evaluating and fitting alternative models to the CSR is needed. This paper shows how the "reduced second-moment measure" of the point pattern can be employed to estimate the parameters of a Neyman-Scott model and to evaluate the degree of adequacy to the experimental data. Some limitations of this theoretical framework, and also its effectiveness, in comparison to the use of scaling functions, are discussed.
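
    A minimal sketch of the second-moment comparison described above (my construction; no edge correction is applied): Ripley's K is estimated for a uniform (CSR) pattern, where K(r) ≈ πr², and for a Neyman-Scott-style clustered pattern, where it should exceed πr².

```python
import numpy as np

rng = np.random.default_rng(9)

def ripley_k(points, radii, area=1.0):
    """Naive Ripley's K estimator: area times the mean number of neighbors
    within r per ordered pair (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.array([(d <= r).sum() for r in radii]) * area / (n * (n - 1))

def neyman_scott(n_parents=25, kids=8, spread=0.02):
    """Clustered pattern: Gaussian offspring around random parent points."""
    parents = rng.uniform(0, 1, size=(n_parents, 2))
    pts = parents[rng.integers(0, n_parents, n_parents * kids)]
    return pts + rng.normal(0, spread, size=pts.shape)

radii = np.array([0.02, 0.05, 0.10])
csr = rng.uniform(0, 1, size=(200, 2))
print("CSR       K(r):", np.round(ripley_k(csr, radii), 4))
print("pi r^2        :", np.round(np.pi * radii**2, 4))
print("clustered K(r):", np.round(ripley_k(neyman_scott(), radii), 4))
```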

  10. Determining the Number of Clusters in a Data Set Without Graphical Interpretation

    NASA Technical Reports Server (NTRS)

    Aguirre, Nathan S.; Davies, Misty D.

    2011-01-01

    Cluster analysis is a data mining technique that is meant to simplify the process of classifying data points. The basic clustering process requires an input of data points and the number of clusters wanted. The clustering algorithm will then pick starting C points for the clusters, which can be either random spatial points or random data points. It then assigns each data point to the nearest C point, where "nearest" usually means Euclidean distance, but some algorithms use another criterion. The next step is determining whether the clustering arrangement thus found is within a certain tolerance. If it falls within this tolerance, the process ends. Otherwise the C points are adjusted based on how many data points are in each cluster, and the steps repeat until the algorithm converges.
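
    The loop described above is essentially k-means; here is a minimal sketch with Euclidean distance and a convergence tolerance (the Gaussian blobs are synthetic stand-ins for real data):

```python
import numpy as np

rng = np.random.default_rng(10)

def kmeans(data, c, tol=1e-6, max_iter=100):
    """Basic clustering loop as described above: pick C starting points from
    the data, assign each point to the nearest centre, recompute centres, and
    stop once they move less than the tolerance."""
    centers = data[rng.choice(len(data), size=c, replace=False)]
    for _ in range(max_iter):
        # nearest-centre assignment (Euclidean distance)
        labels = np.argmin(np.linalg.norm(data[:, None] - centers[None], axis=-1), axis=1)
        # adjust each centre to the mean of its assigned points
        new = np.array([data[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(c)])
        if np.linalg.norm(new - centers) < tol:   # within tolerance -> done
            return new, labels
        centers = new
    return centers, labels

data = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0, 2, 4)])
centers, labels = kmeans(data, c=3)
print(centers)
```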

  11. Random-Walk Type Model with Fat Tails for Financial Markets

    NASA Astrophysics Data System (ADS)

    Matuttis, Hans-Georg

    Starting from the random-walk model, practices of financial markets are incorporated into the random walk so that fat-tail distributions like those in the high-frequency data of the S&P 500 index are reproduced, even though the individual mechanisms are modeled by normally distributed data. The incorporation of local correlation narrows the distribution for "frequent" events, whereas global correlations due to technical analysis lead to fat tails. Delay of market transactions in the trading process shifts the fat-tail probabilities downwards. Such an inclusion of reactions to market fluctuations leads to mini-trends which are distributed with unit variance.

  12. Development of Curie point switching for thin film, random access, memory device

    NASA Technical Reports Server (NTRS)

    Lewicki, G. W.; Tchernev, D. I.

    1967-01-01

    Manganese bismuthide films are used in the development of a random access memory device of high packing density and nondestructive readout capability. Memory entry is by Curie point switching using a laser beam. Readout is accomplished by microoptical or micromagnetic scanning.

  13. Health impact and cost-effectiveness of a private sector bed net distribution: experimental evidence from Zambia.

    PubMed

    Sedlmayr, Richard; Fink, Günther; Miller, John M; Earle, Duncan; Steketee, Richard W

    2013-03-18

    Relatively few programmes have attempted to actively engage the private sector in national malaria control efforts. This paper evaluates the health impact of a large-scale distribution of insecticide-treated nets (ITNs) conducted in partnership with a Zambian agribusiness, and its cost-effectiveness from the perspective of the National Malaria Control Programme (NMCP). The study was designed as a cluster-randomized controlled trial. A list of 81,597 cotton farmers was obtained from Dunavant, a contract farming company in Zambia's cotton sector, in December 2010. 39,963 (49%) were randomly selected to obtain one ITN each. Follow-up interviews were conducted with 438 farmers in the treatment and 458 farmers in the control group in June and July 2011. Treatment and control households were compared with respect to bed net ownership, bed net usage, self-reported fever, and self-reported confirmed malaria. Cost data was collected throughout the programme. The distribution effectively reached target beneficiaries, with approximately 95% of households in the treatment group reporting that they had received an ITN through the programme. The average increase in the fraction of household members sleeping under an ITN the night prior to the interview was 14.6 percentage points (p-value <0.001). Treatment was associated with a 42 percent reduction in the odds of self-reported fever (p-value <0.001) and with a 49 percent reduction in the odds of self-reported malaria (p-value 0.002). This was accomplished at a cost of approximately US$5 per ITN to Zambia's NMCP. The results illustrate that existing private sector networks can efficiently control malaria in remote rural regions. The intra-household allocation of ITNs distributed through this channel was comparable to that of ITNs received from other sources, and the health impact remained substantial.

  14. Distribution of shortest cycle lengths in random networks

    NASA Astrophysics Data System (ADS)

    Bonneau, Haggai; Hassid, Aviv; Biham, Ofer; Kühn, Reimer; Katzav, Eytan

    2017-12-01

    We present analytical results for the distribution of shortest cycle lengths (DSCL) in random networks. The approach is based on the relation between the DSCL and the distribution of shortest path lengths (DSPL). We apply this approach to configuration model networks, for which analytical results for the DSPL were obtained before. We first calculate the fraction of nodes in the network which reside on at least one cycle. Conditioning on being on a cycle, we provide the DSCL over ensembles of configuration model networks with degree distributions which follow a Poisson distribution (Erdős-Rényi network), degenerate distribution (random regular graph), and a power-law distribution (scale-free network). The mean and variance of the DSCL are calculated. The analytical results are found to be in very good agreement with the results of computer simulations.
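
    The DSCL itself is straightforward to sample numerically. In the sketch below (my construction, using networkx), the shortest cycle through a node v is found by removing each incident edge (v, u) in turn and measuring the shortest surviving v-u path, applied to a configuration-model graph with a Poisson-like degree sequence.

```python
from collections import Counter

import networkx as nx
import numpy as np

rng = np.random.default_rng(11)

def shortest_cycle_through(G, v):
    """Length of the shortest cycle through v, or None if v lies on no cycle:
    for each incident edge (v, u), remove it and measure the shortest
    remaining v-u path; the cycle length is that path plus the removed edge."""
    best = None
    for u in list(G.neighbors(v)):
        G.remove_edge(v, u)
        try:
            cand = nx.shortest_path_length(G, v, u) + 1
            best = cand if best is None else min(best, cand)
        except nx.NetworkXNoPath:
            pass
        G.add_edge(v, u)
    return best

# configuration-model network with a Poisson degree sequence (ER-like)
deg = rng.poisson(3, 1000)
deg[0] += deg.sum() % 2                      # degree sum must be even
G = nx.Graph(nx.configuration_model(deg))    # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

lengths = [shortest_cycle_through(G, v) for v in G]
on_cycle = [L for L in lengths if L is not None]
print("fraction of nodes on a cycle:", len(on_cycle) / G.number_of_nodes())
print("DSCL:", sorted(Counter(on_cycle).items()))
```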

  15. Choice of rating scale labels: implication for minimizing patient satisfaction response ceiling effect in telemedicine surveys.

    PubMed

    Masino, Caterina; Lam, Tony C M

    2014-12-01

    Lack of response variability is problematic in surveys because of its detrimental effects on sensitivity and consequently reliability of the responses. In satisfaction surveys, this problem is caused by the ceiling effect resulting from high satisfaction ratings. A potential solution strategy is to manipulate the labels of the rating scale to create greater discrimination of responses on the high end of the response continuum. This study examined the effects of a positive-centered scale on the distribution and reliability of telemedicine satisfaction responses in a highly positive respondent population. In total, 216 telemedicine participants were randomly assigned to one of three experimental conditions as defined by the form of Likert scale: (1) 5-point Balanced Equal-Interval, (2) 5-point Positive-Packed, and (3) 5-point Positive-Centered Equal-Interval. Although the study findings were not statistically significant, partially because of sample size, the distribution and internal consistency reliability of responses occurred in the direction hypothesized. Loading the rating scale with more positive labels appears to be a useful strategy for reducing the ceiling effect and increasing the discrimination ability of survey responses. The current research provides a survey design strategy to minimize ceiling effects. Although the findings provide some evidence suggesting the benefit of using rating scales loaded with positive labels, more research is needed to confirm this, as well as to extend it to other types of rating scales and to the interaction between rating scale formats and respondent characteristics.

  16. Participatory Risk Mapping of Malaria Vector Exposure in Northern South America using Environmental and Population Data

    PubMed Central

    Fuller, D.O.; Troyo, A.; Alimi, T.O.; Beier, J.C.

    2014-01-01

    Malaria elimination remains a major public health challenge in many tropical regions, including large areas of northern South America. In this study, we present a new high spatial resolution (90 × 90 m) risk map for Colombia and surrounding areas based on environmental and human population data. The map was created through a participatory multi-criteria decision analysis in which expert opinion was solicited to determine key environmental and population risk factors, different fuzzy functions to standardize risk factor inputs, and variable factor weights to combine risk factors in a geographic information system. The new risk map was compared to a map of malaria cases in which cases were aggregated to the municipio (municipality) level. The relationship between mean municipio risk scores and total cases by municipio showed a weak correlation. However, the relationship between pixel-level risk scores and vector occurrence points for two dominant vector species, Anopheles albimanus and An. darlingi, was significantly different (p < 0.05) from a random point distribution, as was a pooled point distribution for these two vector species and An. nuneztovari. Thus, we conclude that the new risk map derived from expert opinion provides an accurate spatial representation of the risk of potential vector exposure rather than of malaria transmission as shown by the pattern of malaria cases, and therefore it may be used to inform public health authorities as to where vector control measures should be prioritized to limit human-vector contact in future malaria outbreaks. PMID:24976656
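
    The map-building step lends itself to a compact sketch: fuzzify each raw factor to a 0-1 score, then combine the scores with expert weights. The factor names, fuzzy-function parameters, and weights below are illustrative assumptions, not the values elicited from the experts in the study.

    ```python
    # Schematic weighted linear combination of fuzzified risk factors.
    import numpy as np

    def fuzzy_sigmoid(x, midpoint, spread):
        """Map a raw factor to a 0-1 risk score with a sigmoid membership function."""
        return 1.0 / (1.0 + np.exp(-(x - midpoint) / spread))

    # Toy 2x3-pixel rasters: temperature (degC) and population density (per km2)
    temperature = np.array([[24.0, 27.0, 30.0], [22.0, 26.0, 29.0]])
    pop_density = np.array([[10.0, 80.0, 300.0], [5.0, 50.0, 150.0]])

    risk_temp = fuzzy_sigmoid(temperature, midpoint=25.0, spread=2.0)
    risk_pop = fuzzy_sigmoid(np.log10(pop_density + 1), midpoint=1.5, spread=0.5)

    weights = {"temp": 0.6, "pop": 0.4}   # placeholder expert weights
    risk = weights["temp"] * risk_temp + weights["pop"] * risk_pop
    print(np.round(risk, 2))              # combined 0-1 risk surface
    ```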

  17. Hole-ness of point clouds

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.

    2015-04-01

    Accurate and dense 3D models of soil surfaces can be used in various ways: They can be used as initial shapes for erosion models. They can be used as benchmark shapes for erosion model outputs. They can be used to derive metrics, such as random roughness... One easy and low-cost method to produce these models is structure from motion (SfM). With this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and areas with different roughness structures have been formed. For different moisture states - dry, medium, saturated - and two different lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions have been taken. From the six image sets, 3D point clouds have been produced using VisualSfM. The visual inspection of the 3D models showed that all models have different areas where holes of different sizes occur. But it is obviously a subjective task to determine a model's quality by visual inspection. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected on a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, many points will be projected onto the same grid cell, and thus the point density depends more on the shape of the surface than on the quality of the model. Another approach uses the points resulting from Poisson Surface Reconstruction. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes. Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed: For all Poisson points, the distance to the closest original point cloud member has been calculated. For the resulting set of distances, histograms have been produced that show the distribution of point distances. As the Poisson points also make up a connected mesh, the size and distribution of single holes can also be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number. Afterwards, the area of the mesh formed by each set of Poisson hole points can be calculated. The result is a set of distinctive holes and their sizes. The two approaches showed that the hole-ness of the point cloud depends on the soil moisture and hence the reflectivity: the distance distribution of the model of the saturated soil shows the smallest number of large distances. The histogram of the medium state shows more large distances, and the dry model shows the largest distances. For all moisture states, models resulting from indirect lighting are better than those resulting from direct light.
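
    The first analysis, nearest-original distances for every Poisson point, is straightforward with a k-d tree; a sketch with randomly generated stand-in clouds follows.

    ```python
    # Distance from each Poisson-reconstructed point to the nearest original
    # SfM point; the point sets here are synthetic stand-ins for real clouds.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    original = rng.uniform(0, 1, size=(50_000, 3))   # stand-in SfM point cloud
    poisson = rng.uniform(0, 1, size=(10_000, 3))    # stand-in Poisson points

    tree = cKDTree(original)
    dist, _ = tree.query(poisson, k=1)               # nearest-original distances

    hist, edges = np.histogram(dist, bins=50)        # the paper's histograms
    print("median distance:", np.median(dist))
    print("fraction of 'large' distances (> 3x median):",
          np.mean(dist > 3 * np.median(dist)))
    ```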

  18. Statistical dynamics of regional populations and economies

    NASA Astrophysics Data System (ADS)

    Huo, Jie; Wang, Xu-Ming; Hao, Rui; Wang, Peng

    Quantitative analysis of human behavior and social development is becoming a hot spot of interdisciplinary studies. A statistical analysis of the population and GDP of 150 cities in China from 1990 to 2013 is conducted. The results indicate that the cumulative probability distributions of the populations and of the GDPs each obey a shifted power law. In order to understand these characteristics, a generalized Langevin equation describing the variation of population is proposed, based on the correlations between population and GDP as well as on the random fluctuations of the related factors. The equation is transformed into a Fokker-Planck equation to express the evolution of the population distribution. The general solution demonstrates a transition of the distribution from the normal Gaussian distribution to a shifted power law, which suggests a critical point in time at which the transition takes place. The shifted power law distribution in the supercritical situation is qualitatively in accordance with the empirical result. The distribution of the GDPs is derived from the well-known Cobb-Douglas production function. The result presents a change, in the supercritical situation, from a shifted power law to the Gaussian distribution. This is a surprising result: the regional GDP distribution of our world may one day be Gaussian. Discussion based on the changing trend of economic growth suggests this will be true. Therefore, these theoretical attempts may draw a historical picture of our society in the aspects of population and economy.
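
    A minimal sketch of fitting a shifted power law to an empirical cumulative distribution, assuming the form P(X > x) ∝ (x + x0)^(-α); the sample is synthetic and stands in for the city population series.

    ```python
    # Fit a shifted power law to the empirical CCDF of a synthetic sample.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    # Draw from a shifted Pareto via inverse transform: X = x0*(U^(-1/alpha) - 1)
    alpha_true, x0_true = 1.8, 50.0
    x = x0_true * (rng.uniform(size=5000) ** (-1.0 / alpha_true) - 1.0)

    xs = np.sort(x)
    ccdf = 1.0 - np.arange(1, len(xs) + 1) / len(xs)  # empirical P(X > x)

    def log_ccdf_model(x, logc, x0, alpha):
        return logc - alpha * np.log(x + x0)

    popt, _ = curve_fit(log_ccdf_model, xs[:-1], np.log(ccdf[:-1]),
                        p0=(0.0, 10.0, 1.5),
                        bounds=([-np.inf, 0.0, 0.1], [np.inf, np.inf, 10.0]))
    print("fitted x0 = %.1f, alpha = %.2f" % (popt[1], popt[2]))  # ~50, ~1.8
    ```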

  19. Can Targeted Intervention Mitigate Early Emotional and Behavioral Problems?: Generating Robust Evidence within Randomized Controlled Trials

    PubMed Central

    Doyle, Orla; McGlanaghy, Edel; O’Farrelly, Christine; Tremblay, Richard E.

    2016-01-01

    This study examined the impact of a targeted Irish early intervention program on children's emotional and behavioral development, using multiple methods to test the robustness of the results. Data on 164 Preparing for Life participants, who were randomly assigned into an intervention group involving home visits from pregnancy onwards or a control group, were used to test the impact of the intervention on Child Behavior Checklist scores at 24 months. Using inverse probability weighting to account for differential attrition, permutation testing to address the small sample size, and quantile regression to characterize the distributional impact of the intervention, we found that the few treatment effects were largely concentrated among boys most at risk of developing emotional and behavioral problems. The average treatment effect identified a 13% reduction in the likelihood of falling into the borderline clinical threshold for Total Problems. The interaction and subgroup analyses found that this main effect was driven by boys. The distributional analysis identified a 10-point reduction in the Externalizing Problems score for boys at the 90th percentile. No effects were observed for girls or for the continuous measures of Total, Internalizing, and Externalizing problems. These findings suggest that the impact of this prenatally commencing home visiting program may be limited to boys experiencing the most difficulties. Further adoption of the statistical methods applied here may help to improve the internal validity of randomized controlled trials and contribute to the field of evaluation science more generally. Trial Registration: ISRCTN Registry ISRCTN04631728 PMID:27253184
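
    Of the three methods, permutation testing is the easiest to sketch: the reference distribution is built by reshuffling group labels. The outcome scores below are simulated, standing in for the CBCL data.

    ```python
    # Two-sided permutation test of a difference in group means.
    import numpy as np

    rng = np.random.default_rng(2)
    treated = rng.normal(48.0, 10.0, size=80)   # simulated problem scores
    control = rng.normal(52.0, 10.0, size=84)

    observed = treated.mean() - control.mean()
    pooled = np.concatenate([treated, control])

    n_perm, count = 10_000, 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # relabel groups at random
        diff = pooled[:80].mean() - pooled[80:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    print("two-sided permutation p-value:", count / n_perm)
    ```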

  20. Investigating the generation of Love waves in secondary microseisms using 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Wenk, Stefan; Hadziioannou, Celine; Pelties, Christian; Igel, Heiner

    2014-05-01

    Longuet-Higgins (1950) proposed that secondary microseismic noise can be attributed to oceanic disturbances by surface gravity wave interference causing non-linear, second-order pressure perturbations at the ocean bottom. As a first approximation, this source mechanism can be considered as a force acting normal to the ocean bottom. In an isotropic, layered, elastic Earth model with planar interfaces, vertical forces generate P-SV motions in the vertical plane of source and receiver. In turn, only Rayleigh waves are excited at the free surface. However, several authors report significant Love wave contributions in the secondary microseismic frequency band of real data measurements. The reason is still insufficiently analysed, and several hypotheses are under debate:
    - The source mechanism has the strongest influence on the excitation of shear motions, whereas the source direction dominates the effect of Love wave generation in the case of point force sources. Darbyshire and Okeke (1969) proposed the topographic coupling effect of pressure loads acting on a sloping sea floor to generate the shear tractions required for Love wave excitation.
    - Rayleigh waves can be converted into Love waves by scattering. Therefore, geometric scattering at topographic features or internal scattering by heterogeneous material distributions can cause Love wave generation.
    - Oceanic disturbances act on large regions of the ocean bottom, and extended sources have to be considered. In combination with topographic coupling and internal scattering, the extent of the source region and the timing of an extended source should affect Love wave excitation.
    We investigate the contributions of different source mechanisms and scattering effects to Love-to-Rayleigh wave energy ratios by 3D numerical simulations. In particular, we estimate the amount of Love wave energy generated by point and extended sources acting on the free surface. Simulated point forces are modified in their incident angle, whereas extended sources are varied in their spatial extent, magnitude and timing. Further, the effects of variations in the correlation length and perturbation magnitude of a random free-surface topography, as well as of an internal random material distribution, are studied.

  1. Random covering of the circle: the configuration-space of the free deposition process

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry

    2003-12-01

    Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard-rod and packing constraints are both fulfilled, and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
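
    The configurations are easy to probe by Monte Carlo at a fixed density ρ = ns. In the sketch below, the reading of the packing constraint (no empty gap can hold another rod, i.e., every spacing between consecutive points is less than 2s) is our assumption about the paper's definition.

    ```python
    # Monte Carlo probabilities of the rod-gas configurations on a unit circle.
    import numpy as np

    def spacings(points):
        """Circular spacings between consecutive points on a circle of length 1."""
        p = np.sort(points)
        return np.diff(np.append(p, p[0] + 1.0))

    def configuration_probs(n, s, trials=100_000, seed=3):
        rng = np.random.default_rng(seed)
        hits = {"hard_rod": 0, "packing": 0, "parking": 0, "covering": 0}
        for _ in range(trials):
            g = spacings(rng.uniform(size=n))
            hard = np.all(g >= s)       # no two rods overlap
            pack = np.all(g < 2 * s)    # no empty gap can fit another rod
            hits["hard_rod"] += hard
            hits["packing"] += pack
            hits["parking"] += hard and pack
            hits["covering"] += np.all(g <= s)  # every arc reaches the next point
        return {k: v / trials for k, v in hits.items()}

    print(configuration_probs(n=10, s=0.02))   # low density: hard-rod gas likely
    print(configuration_probs(n=100, s=0.05))  # high density: covering likely
    ```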

  2. Exploratory study and application of the angular wavelet analysis for assessing the spatial distribution of breakdown spots in Pt/HfO2/Pt structures

    NASA Astrophysics Data System (ADS)

    Muñoz-Gorriz, J.; Monaghan, S.; Cherkaoui, K.; Suñé, J.; Hurley, P. K.; Miranda, E.

    2017-12-01

    The angular wavelet analysis is applied for assessing the spatial distribution of breakdown spots in Pt/HfO2/Pt capacitors with areas ranging from 10⁴ to 10⁵ μm². The breakdown spot lateral sizes are in the range from 1 to 3 μm, and the spots appear distributed on the top metal electrode as a point pattern. The spots are generated by ramped and constant voltage stresses and are the consequence of microexplosions caused by the formation of shorts spanning the dielectric film. This kind of pattern was analyzed in the past using conventional spatial analysis tools such as intensity plots, distance histograms, the pair correlation function, and nearest neighbours. Here, we show that the wavelet analysis offers an alternative and complementary method for testing whether or not the failure site distribution departs from a complete spatial randomness process in the angular domain. The effects of using different wavelet functions, such as the Haar, Sine, French top hat, Mexican hat, and Morlet, as well as the roles played by the process intensity, the location of the voltage probe, and the aspect ratio of the device, are all discussed.
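
    Not the paper's wavelet method, but a simple baseline for the same question: test whether the spot angles around the probe are uniform, here with a Kolmogorov-Smirnov test on simulated spots in a disc-shaped device with a central probe. As the abstract notes, probe location and device aspect ratio matter; for an off-centre probe or a rectangular electrode, the complete-spatial-randomness reference is no longer uniform in angle.

    ```python
    # Baseline angular CSR test on simulated breakdown spots in a unit disc.
    import numpy as np
    from scipy.stats import kstest

    rng = np.random.default_rng(4)
    theta = rng.uniform(0, 2 * np.pi, 200)
    r = np.sqrt(rng.uniform(size=200))                    # uniform in the disc
    spots = np.c_[r * np.cos(theta), r * np.sin(theta)]   # CSR spot pattern
    probe = np.array([0.0, 0.0])                          # probe at the centre

    angles = np.arctan2(spots[:, 1] - probe[1], spots[:, 0] - probe[0])
    stat, p = kstest((angles + np.pi) / (2 * np.pi), "uniform")
    print(f"KS statistic = {stat:.3f}, p = {p:.3f}")  # large p: consistent with CSR
    ```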

  3. Bell Test over Extremely High-Loss Channels: Towards Distributing Entangled Photon Pairs between Earth and the Moon

    NASA Astrophysics Data System (ADS)

    Cao, Yuan; Li, Yu-Huai; Zou, Wen-Jie; Li, Zheng-Ping; Shen, Qi; Liao, Sheng-Kai; Ren, Ji-Gang; Yin, Juan; Chen, Yu-Ao; Peng, Cheng-Zhi; Pan, Jian-Wei

    2018-04-01

    Quantum entanglement was termed "spooky action at a distance" in the well-known paper by Einstein, Podolsky, and Rosen. Entanglement is expected to be distributed over longer and longer distances in both practical applications and fundamental research into the principles of nature. Here, we present a proposal for distributing entangled photon pairs between Earth and the Moon using a Lagrangian point at a distance of 1.28 light seconds. One of the most fascinating features of this long-distance distribution of entanglement is as follows: one can perform the Bell test with humans supplying the random measurement settings and recording the results while still maintaining spacelike intervals. To realize a proof-of-principle experiment, we develop an entangled photon source with a 1 GHz generation rate, about 2 orders of magnitude higher than previous results. Violation of Bell's inequality was observed under a total simulated loss of 103 dB with measurement settings chosen by two experimenters. This demonstrates the feasibility of such a long-distance Bell test over extremely high-loss channels, paving the way for one of the ultimate tests of the foundations of quantum mechanics.

  4. Hybrid computer technique yields random signal probability distributions

    NASA Technical Reports Server (NTRS)

    Cameron, W. D.

    1965-01-01

    Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.

  5. Optimized multiple quantum MAS lineshape simulations in solid state NMR

    NASA Astrophysics Data System (ADS)

    Brouwer, William J.; Davis, Michael C.; Mueller, Karl T.

    2009-10-01

    The majority of nuclei available for study in solid state Nuclear Magnetic Resonance have half-integer spin I>1/2, with a corresponding electric quadrupole moment. As such, they may couple with a surrounding electric field gradient. This effect introduces anisotropic line broadening to spectra, arising from distinct chemical species within polycrystalline solids. In Multiple Quantum Magic Angle Spinning (MQMAS) experiments, a second frequency dimension is created, devoid of quadrupolar anisotropy. As a result, the center of gravity of peaks in the high resolution dimension is a function of isotropic second order quadrupole and chemical shift alone. However, for complex materials, these parameters take on a stochastic nature due in turn to structural and chemical disorder. Lineshapes may still overlap in the isotropic dimension, complicating the task of assignment and interpretation. A distributed computational approach is presented here which permits simulation of the two-dimensional MQMAS spectrum, generated by random variates from model distributions of isotropic chemical and quadrupole shifts. Owing to the non-convex nature of the residual sum of squares (RSS) function between experimental and simulated spectra, simulated annealing is used to optimize the simulation parameters. In this manner, local chemical environments for disordered materials may be characterized, and, via a re-sampling approach, error estimates for parameters produced.
    Program summary
    Program title: mqmasOPT
    Catalogue identifier: AEEC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3650
    No. of bytes in distributed program, including test data, etc.: 73 853
    Distribution format: tar.gz
    Programming language: C, OCTAVE
    Computer: UNIX/Linux
    Operating system: UNIX/Linux
    Has the code been vectorised or parallelized?: Yes
    RAM: Example: (1597 powder angles) × (200 samples) × (81 F2 frequency pts) × (31 F1 frequency pts) = 3.5M, SMP AMD Opteron
    Classification: 2.3
    External routines: OCTAVE (http://www.gnu.org/software/octave/), GNU Scientific Library (http://www.gnu.org/software/gsl/), OPENMP (http://openmp.org/wp/)
    Nature of problem: The optimal simulation and modeling of multiple quantum magic angle spinning NMR spectra, for general systems, especially those with mild to significant disorder. The approach outlined and implemented in C and OCTAVE also produces model parameter error estimates.
    Solution method: A model for each distinct chemical site is first proposed, for the individual contribution of crystallite orientations to the spectrum. This model is averaged over all powder angles [1], as well as over the (stochastic) parameters: isotropic chemical shift and quadrupole coupling constant. The latter is accomplished via sampling from a bi-variate Gaussian distribution, using the Box-Muller algorithm to transform Sobol (quasi) random numbers [2]. A simulated annealing optimization is performed, and finally the non-linear jackknife [3] is applied in developing model parameter error estimates.
    Additional comments: The distribution contains a script, mqmasOpt.m, which runs in the OCTAVE language workspace.
    Running time: Example: (1597 powder angles) × (200 samples) × (81 F2 frequency pts) × (31 F1 frequency pts) = 58.35 seconds, SMP AMD Opteron.
    References:
    [1] S.K. Zaremba, Annali di Matematica Pura ed Applicata 73 (1966) 293.
    [2] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, 1992.
    [3] T. Fox, D. Hinkley, K. Larntz, Technometrics 22 (1980) 29.
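
    The sampling step of the solution method, Box-Muller applied to Sobol pairs and then correlated into a bivariate Gaussian of isotropic shift and quadrupole coupling, can be sketched with scipy; the means, widths and correlation below are placeholders, not fitted values.

    ```python
    # Sobol quasi-random pairs -> Box-Muller -> correlated bivariate Gaussian.
    import numpy as np
    from scipy.stats import qmc

    sobol = qmc.Sobol(d=2, scramble=True, seed=5)
    u = sobol.random_base2(m=10)                  # 2^10 quasi-random pairs in (0,1)

    # Box-Muller: two independent standard normals from two uniforms
    z0 = np.sqrt(-2.0 * np.log(u[:, 0])) * np.cos(2.0 * np.pi * u[:, 1])
    z1 = np.sqrt(-2.0 * np.log(u[:, 0])) * np.sin(2.0 * np.pi * u[:, 1])

    # Correlate and shift: delta_iso (ppm) and C_q (MHz); values are placeholders
    mu, sigma, rho = np.array([55.0, 3.2]), np.array([4.0, 0.6]), 0.3
    cq = mu[1] + sigma[1] * z0
    diso = mu[0] + sigma[0] * (rho * z0 + np.sqrt(1 - rho**2) * z1)
    print(np.corrcoef(diso, cq)[0, 1])            # ~0.3, the target correlation
    ```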

  6. Canine retraction and anchorage loss: self-ligating versus conventional brackets in a randomized split-mouth study.

    PubMed

    da Costa Monini, André; Júnior, Luiz Gonzaga Gandini; Martins, Renato Parsekian; Vianna, Alexandre Protásio

    2014-09-01

    To evaluate the velocity of canine retraction, anchorage loss, and changes in canine and first molar inclinations using self-ligating and conventional brackets. Twenty-five adults with Class I malocclusion and a treatment plan involving extraction of four first premolars were selected for this randomized split-mouth controlled trial. Patients had either conventional or self-ligating brackets bonded to maxillary canines at random. Retraction was accomplished using 100-g nickel-titanium closed coil springs, which were reactivated every 4 weeks. Oblique radiographs were taken before and after canine retraction was completed, and the cephalograms were superimposed on stable structures of the maxilla. Cephalometric points were digitized twice by a blinded operator for error control, and the following measurements were collected: canine cusp and apex horizontal changes, molar cusp and apex horizontal changes, and angulation changes in canines and molars. The blinded data, which were normally distributed, were analyzed through paired t-tests for group differences. No differences were found between the two groups for any of the variables tested. Both brackets showed the same velocity of canine retraction and loss of anteroposterior anchorage of the molars. No differences were found between brackets regarding the inclination of canines and first molars.

  7. Coulomb Mechanics And Landscape Geometry Explain Landslide Size Distribution

    NASA Astrophysics Data System (ADS)

    Jeandet, L.; Steer, P.; Lague, D.; Davy, P.

    2017-12-01

    It is generally observed that the dimensions of large bedrock landslides follow power-law scaling relationships. In particular, the non-cumulative frequency distribution (PDF) of bedrock landslide area is well characterized by a negative power law above a critical size, with an exponent of 2.4. However, the respective roles of bedrock mechanical properties, landscape shape and triggering mechanisms in the scaling properties of landslide dimensions are still poorly understood. Yet, unravelling the factors that control this distribution is required to better estimate the total volume of landslides triggered by large earthquakes or storms. To tackle this issue, we develop a simple probabilistic 1D approach to compute the PDF of rupture depths in a given landscape. The model is applied to randomly sampled points along hillslopes of studied digital elevation models. At each point location, the model determines the range of depths and angles leading to unstable rupture planes, by applying a simple Mohr-Coulomb rupture criterion only to the rupture planes that intersect the downhill surface topography. This model therefore accounts for both rock mechanical properties, friction and cohesion, and landscape shape. We show that this model leads to realistic landslide depth distributions, with a power law arising when the number of samples is high enough. The modeled PDFs of landslide size obtained for several landscapes match those from earthquake-driven landslide catalogues for the same landscapes. In turn, this allows us to invert the effective mechanical parameters, friction and cohesion, associated with those specific events, including the Chi-Chi, Wenchuan, Niigata and Gorkha earthquakes. The friction and cohesion ranges (25-35 degrees and 5-20 kPa) are in good agreement with previously inverted values. Our results demonstrate that reduced-complexity mechanics is sufficient to model the distribution of unstable depths, and show the role of landscape variability in the landslide size distribution.
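
    A stripped-down version of the probabilistic model: sample candidate rupture depths and slope angles, and keep the planes that violate a Mohr-Coulomb criterion. This sketch uses the simple infinite-slope factor of safety and draws slope angles from a distribution instead of intersecting planes with a DEM, so it illustrates only the sampling logic; all parameter values are illustrative.

    ```python
    # Sample rupture planes and keep those unstable under Mohr-Coulomb.
    import numpy as np

    gamma = 26_000.0          # unit weight of rock, N/m^3 (assumed)
    phi = np.radians(30.0)    # friction angle, within the inverted 25-35 deg range
    c = 10_000.0              # cohesion, Pa, within the inverted 5-20 kPa range

    rng = np.random.default_rng(6)
    z = rng.uniform(0.5, 50.0, size=200_000)                   # rupture depths, m
    beta = np.radians(rng.uniform(20.0, 60.0, size=200_000))   # slope angles

    # Infinite-slope factor of safety; FS < 1 marks an unstable plane
    fs = (c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)) / (
        gamma * z * np.sin(beta) * np.cos(beta))
    unstable_depths = z[fs < 1.0]

    hist, edges = np.histogram(unstable_depths, bins=50, density=True)  # depth PDF
    print("fraction of sampled planes unstable:", unstable_depths.size / z.size)
    ```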

  8. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by thermal annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was developed for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.

  9. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model, introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The positions of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has previously been validated against observed data for the very shallow unilateral 2014 Mw6.0 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing insight into possible refinements of GMPEs' functional forms.
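
    The fractal number-size distribution of subsources, N(>r) ∝ r^(-D), can be sampled by inverse transform; the exponent D = 2 and the radius bounds below are illustrative assumptions, not values from the paper.

    ```python
    # Inverse-transform sampling of subsource radii from a truncated power law.
    import numpy as np

    def fractal_radii(n, r_min, r_max, D=2.0, seed=7):
        rng = np.random.default_rng(seed)
        u = rng.uniform(size=n)
        # Inverse CDF of a Pareto with exponent D truncated to [r_min, r_max]
        a, b = r_min ** -D, r_max ** -D
        return (a - u * (a - b)) ** (-1.0 / D)

    radii = fractal_radii(n=500, r_min=0.5, r_max=10.0)   # km, illustrative
    print("largest / smallest subsource radius:", radii.max(), radii.min())
    ```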

  10. A spatial scaling relationship for soil moisture in a semiarid landscape, using spatial scaling relationships for pedology

    NASA Astrophysics Data System (ADS)

    Willgoose, G. R.; Chen, M.; Cohen, S.; Saco, P. M.; Hancock, G. R.

    2013-12-01

    In humid areas it is generally considered that soil moisture scales spatially according to the wetness index of the landscape. This scaling arises from lateral downslope flow of ground water within the soil zone. However, in semi-arid and drier regions this lateral flow is small, and fluxes are dominated by vertical flows driven by infiltration and evapotranspiration. Thus, in the absence of runon processes, soil moisture at a location is driven more by local factors, such as the soil and vegetation properties at that location, than by upstream processes draining to that point. The 'apparent' spatial randomness of soil and vegetation properties generally suggests that soil moisture in semi-arid regions is spatially random. In this presentation, a new analysis of summer neutron probe data from the Tarrawarra site near Melbourne, Australia, shows persistent spatial organisation of soil moisture over several years. This suggests a link between permanent features of the catchment (e.g. soil properties) and the soil moisture distribution, even though the spatial pattern of soil moisture during the 4 summers monitored appears spatially random. This and other data establish a prima facie case that soil variations drive spatial variation in soil moisture. Accordingly, we used a previously published spatial scaling relationship for soil properties, derived using the mARM pedogenesis model, to simulate the spatial variation of soil grading. This soil grading distribution was used in the Rosetta pedotransfer model to derive a spatial distribution of soil functional properties (e.g. saturated hydraulic conductivity, porosity). These functional properties were then input into the HYDRUS-1D soil moisture model, and soil moisture was simulated for 3 years at daily resolution. The HYDRUS model used had previously been calibrated to field-observed soil moisture data at our SASMAS field site. The scaling behaviour of soil moisture derived from this modelling will be discussed and compared with observed data from our SASMAS field sites.

  11. Shapiro effect as a possible cause of the low-frequency pulsar timing noise in globular clusters

    NASA Astrophysics Data System (ADS)

    Larchenkova, T. I.; Kopeikin, S. M.

    2006-01-01

    Long-term timing of millisecond pulsars has revealed low-frequency uncorrelated (infrared) noise, presumably of astrophysical origin, in the pulse arrival time (PAT) residuals for some of them. Currently available pulsar timing methods allow the statistical parameters of this noise to be reliably measured by decomposing the PAT residual function into orthogonal Fourier harmonics. In most cases, pulsars in globular clusters show a low-frequency modulation of their rotational phase and spin rate. The relativistic time delay of the pulsar signal in the curved spacetime of randomly distributed and moving globular cluster stars (the Shapiro effect) is suggested as a possible cause of this modulation. Extremely important (from an astrophysical point of view) information about the structure of the globular cluster core, which is inaccessible to study by other observational methods, could be obtained by analyzing the spectral parameters of the low-frequency noise caused by the Shapiro effect and attributable to the random passages of stars near the line of sight to the pulsar. Given the smallness of the aberration corrections that arise from the nonstationarity of the gravitational field of the randomly distributed ensemble of stars under consideration, a formula is derived for the Shapiro effect for a pulsar in a globular cluster. The derived formula is used to calculate the autocorrelation function of the low-frequency pulsar noise, the slope of its power spectrum, and the behavior of the σz statistic that characterizes the spectral properties of this noise as a function of time. The Shapiro effect under discussion is shown to manifest itself for large impact parameters as low-frequency noise in the pulsar spin rate with a spectral index of n = -1.8 that depends weakly on the specific model distribution of stars in the globular cluster. For small impact parameters, the spectral index of the noise is n = -1.5.

  12. Application of smart BFRP bars with distributed fiber optic sensors into concrete structures

    NASA Astrophysics Data System (ADS)

    Tang, Yongsheng; Wu, Zhishen; Yang, Caiqian; Wu, Gang; Zhao, Lihua; Song, Shiwei

    2010-04-01

    In this paper, the self-sensing and mechanical properties of concrete structures strengthened with a novel type of smart basalt fiber reinforced polymer (BFRP) bar were experimentally studied; the sensing element is a Brillouin scattering-based distributed optical fiber sensor. First, one of the smart bars was used to strengthen a 2 m concrete beam under four-point static loading in the laboratory. During the experiment, the bar measured the internal strain changes and monitored the randomly distributed cracks well. With the distributed strain information along the bar, the distributed deformation of the beam can be calculated, and the structural health can be monitored and evaluated as well. Then, two smart bars with a length of about 70 m were embedded into a concrete airfield pavement reinforced by long BFRP bars. In the field test, all the optical fiber sensors in the smart bars survived the whole concrete casting process and worked well. From the measured data, concrete cracks along the pavement length can be easily monitored. The experimental results also confirmed that the bars can strengthen the structures, especially after yielding of the steel bars. All the results confirm that this new type of smart BFRP bar shows not only good sensing performance but also good mechanical performance in concrete structures.

  13. Random walks with random velocities.

    PubMed

    Zaburdaev, Vasily; Schmiedeberg, Michael; Stark, Holger

    2008-07-01

    We consider a random walk model that takes into account the velocity distribution of random walkers. Random motion with alternating velocities is inherent to various physical and biological systems. Moreover, the velocity distribution is often the first characteristic that is experimentally accessible. Here, we derive transport equations describing the dispersal process in the model and solve them analytically. The asymptotic properties of solutions are presented in the form of a phase diagram that shows all possible scaling regimes, including superdiffusive, ballistic, and superballistic motion. The theoretical results of this work are in excellent agreement with accompanying numerical simulations.
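
    A minimal simulation of this model class: ballistic flights of exponentially distributed duration, each with a freshly drawn random velocity. Gaussian velocities, as below, give normal diffusion; heavy-tailed velocity distributions produce the anomalous regimes mapped in the paper's phase diagram.

    ```python
    # Random walk with a random velocity per flight; check diffusive scaling.
    import numpy as np

    rng = np.random.default_rng(8)
    n_walkers, n_flights = 5000, 200

    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    for _ in range(n_flights):
        tau = rng.exponential(1.0, n_walkers)    # flight durations
        v = rng.normal(0.0, 1.0, n_walkers)      # fresh random velocity per flight
        x += v * tau
        t += tau

    # For normal diffusion <x^2> grows linearly with elapsed time
    print("MSD / <t> =", np.mean(x**2) / np.mean(t))   # ~2 for these distributions
    ```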

  14. Apker Award Recipient: Renormalization-Group Study of Helium Mixtures Immersed in a Porous Medium

    NASA Astrophysics Data System (ADS)

    Lopatnikova, Anna

    1998-03-01

    Superfluidity and phase separation in ^3He-^4He mixtures immersed in aerogel are studied by renormalization-group theory. Firstly, the theory is applied to jungle-gym (non-random) aerogel.(A. Lopatnikova and A.N. Berker, Phys. Rev. B 55, 3798 (1997).) This calculation is conducted via the coupled renormalization-group mappings of interactions near and away from aerogel. Superfluidity at very low ^4He concentrations and a depressed tricritical temperature are found at the onset of superfluidity. A superfluid-superfluid phase separation, terminating at an isolated critical point, is found entirely within the superfluid phase. Secondly, the theory is applied to true aerogel, which has quenched disorder at both atomic and geometric levels.(A. Lopatnikova and A.N. Berker, Phys. Rev. B 56, 11865 (1997).) This calculation is conducted via the coupled renormalization-group mappings, near and away from aerogel, of quenched probability distributions of random interactions. Random-bond effects on superfluidity onset and random-field effects on superfluid phase separation are seen. The quenched randomness causes the λ line of second-order phase transitions of superfluidity onset to reach zero temperature, in agreement with general prediction and experiments. Based on these studies, the experimentally observed(S.B. Kim, J. Ma, and M.H.W. Chan, Phys. Rev. Lett. 71, 2268 (1993); N. Mulders and M.H.W. Chan, Phys. Rev. Lett. 75, 3705 (1995).) distinctive characteristics of ^3He-^4He mixtures in aerogel are related to the aerogel properties of connectivity, tenuousness, and atomic and geometric randomness.

  15. Modeling species-abundance relationships in multi-species collections

    USGS Publications Warehouse

    Peng, S.; Yin, Z.; Ren, H.; Guo, Q.

    2003-01-01

    The species-abundance relationship is one of the most fundamental aspects of community ecology. Since Motomura first developed the geometric series model to describe features of community structure, ecologists have developed many other models to fit species-abundance data in communities. These models can be classified into empirical and theoretical ones, including (1) statistical models, i.e., the negative binomial distribution (and its extension), log-series distribution (and its extension), geometric distribution, lognormal distribution, and Poisson-lognormal distribution; (2) niche models, i.e., the geometric series, broken stick, overlapping niche, particulate niche, random assortment, dominance pre-emption, dominance decay, random fraction, weighted random fraction, composite niche, and Zipf or Zipf-Mandelbrot models; and (3) dynamic models describing community dynamics and the restrictive function of the environment on the community. These models have different characteristics and fit species-abundance data in various communities or collections. Among them, the log-series distribution, lognormal distribution, geometric series, and broken stick model have been most widely used.
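
    Two of the listed niche models have simple closed-form expected ranked abundances, sketched below for a toy community of S species and N individuals (all values illustrative).

    ```python
    # Expected ranked abundances under the broken stick and geometric series models.
    import numpy as np

    S, N = 20, 1000

    # Broken stick: E[n_i] = (N / S) * sum_{k=i..S} 1/k  (i = 1 is most abundant)
    broken_stick = np.array([(N / S) * sum(1.0 / k for k in range(i, S + 1))
                             for i in range(1, S + 1)])

    # Geometric series with preemption fraction k: n_i proportional to k(1-k)^(i-1)
    k = 0.3
    geom = k * (1 - k) ** np.arange(S)
    geometric = N * geom / geom.sum()

    print(np.round(broken_stick[:5], 1))   # most abundant ranks, broken stick
    print(np.round(geometric[:5], 1))      # most abundant ranks, geometric series
    ```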

  16. Does prism width from the shell prismatic layer have a random distribution?

    NASA Astrophysics Data System (ADS)

    Vancolen, Séverine; Verrecchia, Eric

    2008-10-01

    A study of the distribution of the prism width inside the prismatic layer of Unio tumidus (Philipsson 1788, Diss Hist-Nat, Berling, Lundæ) from Lake Neuchâtel, Switzerland, has been conducted in order to determine whether or not this distribution is random. Measurements of 954 to 1,343 prism widths (depending on shell sample) have been made using a scanning electron microscope in backscattered electron mode. A white noise test has been applied to the distribution of prism sizes (i.e. width). It shows that there is no temporal cycle that could potentially influence their formation and growth. These results suggest that prism widths are randomly distributed, and related neither to external rings nor to environmental constraints.

  17. Super-resolving random-Gaussian apodized photon sieve.

    PubMed

    Sabatyan, Arash; Roshaninejad, Parisa

    2012-09-10

    A novel apodized photon sieve is presented in which a random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The randomness of the dense Gaussian distribution causes intrazone discontinuities. The dense Gaussian distribution also generates a substantial number of pinholes, so as to form a large degree of overlap between the holes in the few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities in the focusing properties of the photon sieve is examined as well. Analysis shows that the secondary maxima are evidently suppressed, the transmission is increased enormously, and the width of the central maximum is approximately unchanged in comparison to the dense Gaussian distribution. The theoretical results have been completely verified by experiment.

  18. Spatial Distribution of Phase Singularities in Optical Random Vector Waves.

    PubMed

    De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L

    2016-08-26

    Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.

  19. Random distributed feedback fiber laser at 2.1 μm.

    PubMed

    Jin, Xiaoxi; Lou, Zhaokai; Zhang, Hanwei; Xu, Jiangming; Zhou, Pu; Liu, Zejin

    2016-11-01

    We demonstrate a random distributed feedback fiber laser at 2.1 μm. A high-power pulsed Tm-doped fiber laser operating at 1.94 μm with a temporal duty ratio of 30% was employed as the pump laser to increase the equivalent incident pump power. A piece of 150 m highly GeO2-doped silica fiber, which provides strong Raman gain and random distributed feedback, was used as the gain medium. The maximum output power reached 0.5 W with an optical efficiency of 9%, which could be further improved with more pump power and an optimized fiber length. To the best of our knowledge, this is the first demonstration of a random distributed feedback fiber laser in the 2 μm band based on Raman gain.

  20. The Role of Landscape in the Distribution of Deer-Vehicle Collisions in South Mississippi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKee, Jacob J; Cochran, David

    2012-01-01

    Deer-vehicle collisions (DVCs) have a negative impact on the economy, traffic safety, and the general well-being of otherwise healthy deer. To mitigate DVCs, it is imperative to gain a better understanding of factors that play a role in their spatial distribution. Much of the existing research on DVCs in the United States has been inconclusive, pointing to a variety of causal factors that seem more specific to study site and region than indicative of broad patterns. Little DVC research has been conducted in the southern United States, making the region particularly important with regard to this issue. In this study, we evaluate landscape factors that contributed to the distribution of 347 DVCs that occurred in Forrest and Lamar Counties of south Mississippi, from 2006 to 2009. Using nearest-neighbor and discriminant analysis, we demonstrate that DVCs in south Mississippi are not random spatial phenomena. We also develop a classification model that identified seven landscape metrics, explained 100% of the variance, and could distinguish DVCs from control sites with an accuracy of 81.3 percent.

  1. Nonuniversality of density and disorder in jammed sphere packings

    NASA Astrophysics Data System (ADS)

    Jiao, Yang; Stillinger, Frank H.; Torquato, Salvatore

    2011-01-01

    We show for the first time that collectively jammed disordered packings of three-dimensional monodisperse frictionless hard spheres can be produced and tuned using a novel numerical protocol with packing density ϕ as low as 0.6. This is well below the value of 0.64 associated with the maximally random jammed state and entirely unrelated to the ill-defined "random loose packing" state density. Specifically, collectively jammed packings are generated with a very narrow distribution centered at any density ϕ over a wide density range ϕ ∈ (0.6, 0.74048…) with variable disorder. Our results support the view that there is no universal jamming point that is distinguishable based on the packing density and frequency of occurrence. Our jammed packings are mapped onto a density-order-metric plane, which provides a broader characterization of packings than density alone. Other packing characteristics, such as the pair correlation function, average contact number, and fraction of rattlers are quantified and discussed.

  2. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    NASA Astrophysics Data System (ADS)

    Gatto, Riccardo

    2017-12-01

    This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
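
    A Monte Carlo companion to one of the p = 3 cases: uniform step directions, exponential step lengths, and a Poisson-distributed number of steps, with the distance to the origin histogrammed over many walks.

    ```python
    # Pearson random walk in 3D: distance distribution by simulation.
    import numpy as np

    rng = np.random.default_rng(9)
    n_walks, mean_steps = 20_000, 10.0

    distances = np.empty(n_walks)
    for i in range(n_walks):
        n = rng.poisson(mean_steps)
        if n == 0:
            distances[i] = 0.0
            continue
        d = rng.normal(size=(n, 3))
        d /= np.linalg.norm(d, axis=1, keepdims=True)    # uniform directions
        steps = rng.exponential(1.0, size=(n, 1))        # exponential lengths
        distances[i] = np.linalg.norm((d * steps).sum(axis=0))

    print("mean distance to origin:", distances.mean())
    hist, edges = np.histogram(distances, bins=60, density=True)  # the target PDF
    ```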

  3. Perturbed-input-data ensemble modeling of magnetospheric dynamics

    NASA Astrophysics Data System (ADS)

    Morley, S.; Steinberg, J. T.; Haiducek, J. D.; Welling, D. T.; Hassan, E.; Weaver, B. P.

    2017-12-01

    Many models of Earth's magnetospheric dynamics - including global magnetohydrodynamic models, reduced-complexity models of substorms, and empirical models - are driven by solar wind parameters. To provide consistent coverage of the upstream solar wind, these measurements are generally taken near the first Lagrangian point (L1) and algorithmically propagated to the nose of Earth's bow shock. However, the plasma and magnetic field measured near L1 is a point measurement of an inhomogeneous medium, so an individual measurement may not be sufficiently representative of the broader region near L1. The measured plasma may not actually interact with the Earth, and the solar wind structure may evolve between L1 and the bow shock. To quantify uncertainties in simulations, as well as to provide probabilistic forecasts, it is desirable to use perturbed-input ensembles of magnetospheric and space weather forecasting models. By using concurrent measurements of the solar wind near L1 and near the Earth, we construct a statistical model of the distributions of solar wind parameters conditioned on their upstream values. So that we can draw random variates from our model, we specify the conditional probability distributions using Kernel Density Estimation. We demonstrate the utility of this approach using ensemble runs of selected models that can be used for space weather prediction.
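
    Conditional sampling from a kernel density model can be sketched directly: fit Gaussian kernels to concurrent (L1, near-Earth) pairs, then, given an upstream value, resample kernels by their weight in the upstream dimension. The paired data and bandwidth rule below are synthetic placeholders, not the study's solar wind measurements.

    ```python
    # Conditional sampling from a bivariate Gaussian-kernel density estimate.
    import numpy as np

    rng = np.random.default_rng(10)
    n = 2000
    l1 = rng.lognormal(1.5, 0.5, n)            # e.g. solar wind density at L1
    near = l1 * rng.lognormal(0.0, 0.2, n)     # concurrent near-Earth value

    hx, hy = 0.3 * l1.std(), 0.3 * near.std()  # fixed kernel bandwidths (assumed)

    def sample_conditional(x0, size=1000):
        """Draw near-Earth variates conditioned on upstream value x0."""
        w = np.exp(-0.5 * ((x0 - l1) / hx) ** 2)       # kernel weights at x0
        w /= w.sum()
        idx = rng.choice(n, size=size, p=w)            # pick kernels by weight
        return near[idx] + hy * rng.normal(size=size)  # add kernel noise in y

    draws = sample_conditional(x0=5.0)
    print("conditional mean and spread:", draws.mean(), draws.std())
    ```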

  4. Statistical representation of multiphase flow

    NASA Astrophysics Data System (ADS)

    Subramaniam

    2000-11-01

    The relationship between two common statistical representations of multiphase flow, namely, the single-point Eulerian statistical representation of two-phase flow (D. A. Drew, Ann. Rev. Fluid Mech. (15), 1983) and the Lagrangian statistical representation of a spray using the droplet distribution function (F. A. Williams, Phys. Fluids 1 (6), 1958), is established for spherical dispersed-phase elements. This relationship is based on recent work which relates the droplet distribution function to single-droplet pdfs starting from a Liouville description of a spray (Subramaniam, Phys. Fluids 10 (12), 2000). The Eulerian representation, which is based on a random-field model of the flow, is shown to contain different statistical information from the Lagrangian representation, which is based on a point-process model. The two descriptions are shown to be simply related for spherical, monodisperse elements in statistically homogeneous two-phase flow, whereas such a simple relationship is precluded by the inclusion of polydispersity and statistical inhomogeneity. The common origin of these two representations is traced to a more fundamental statistical representation of a multiphase flow, whose concepts derive from a theory for dense sprays recently proposed by Edwards (Atomization and Sprays 10 (3-5), 2000). The issue of what constitutes a minimally complete statistical representation of a multiphase flow is resolved.

  5. Alchemical and structural distribution based representation for universal quantum machine learning

    NASA Astrophysics Data System (ADS)

    Faber, Felix A.; Christensen, Anders S.; Huang, Bing; von Lilienfeld, O. Anatole

    2018-06-01

    We introduce a representation of any atom in any chemical environment for the automatized generation of universal kernel ridge regression-based quantum machine learning (QML) models of electronic properties, trained throughout chemical compound space. The representation is based on Gaussian distribution functions, scaled by power laws and explicitly accounting for structural as well as elemental degrees of freedom. The elemental components help us to lower the QML model's learning curve, and, through interpolation across the periodic table, even enable "alchemical extrapolation" to covalent bonding between elements not part of training. This point is demonstrated for the prediction of covalent binding in single, double, and triple bonds among main-group elements as well as for atomization energies in organic molecules. We present numerical evidence that resulting QML energy models, after training on a few thousand random training instances, reach chemical accuracy for out-of-sample compounds. Compound datasets studied include thousands of structurally and compositionally diverse organic molecules, non-covalently bonded protein side-chains, (H2O)40-clusters, and crystalline solids. Learning curves for QML models also indicate competitive predictive power for various other electronic ground state properties of organic molecules, calculated with hybrid density functional theory, including polarizability, heat-capacity, HOMO-LUMO eigenvalues and gap, zero point vibrational energy, dipole moment, and highest vibrational fundamental frequency.
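
    The learning machinery behind such QML models is kernel ridge regression; a bare-bones sketch with a Gaussian kernel on toy one-dimensional "descriptors" follows (the paper's representation itself is far richer than this stand-in).

    ```python
    # Minimal kernel ridge regression with a Gaussian kernel on toy data.
    import numpy as np

    rng = np.random.default_rng(11)
    X_train = rng.uniform(-3, 3, (200, 1))                 # toy descriptors
    y_train = np.sin(X_train[:, 0]) + 0.05 * rng.normal(size=200)

    sigma, lam = 1.0, 1e-6                                 # kernel width, ridge

    def gaussian_kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    K = gaussian_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)  # ridge solve

    X_test = np.array([[0.5], [2.0]])
    y_pred = gaussian_kernel(X_test, X_train) @ alpha
    print(y_pred, np.sin(X_test[:, 0]))                    # prediction vs truth
    ```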

  6. Power in randomized group comparisons: the value of adding a single intermediate time point to a traditional pretest-posttest design.

    PubMed

    Venter, Anre; Maxwell, Scott E; Bolig, Erika

    2002-06-01

    Adding a pretest as a covariate to a randomized posttest-only design increases statistical power, as does the addition of intermediate time points to a randomized pretest-posttest design. Although typically 5 waves of data are required in this instance to produce meaningful gains in power, a 3-wave intensive design allows the evaluation of the straight-line growth model and may reduce the effect of missing data. The authors identify the statistically most powerful method of data analysis in the 3-wave intensive design. If straight-line growth is assumed, the pretest-posttest slope must assume fairly extreme values for the intermediate time point to increase power beyond the standard analysis of covariance on the posttest with the pretest as covariate, ignoring the intermediate time point.
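
    The power gain from adding the pretest as a covariate is easy to see by simulation; the effect size and pre-post correlation below are illustrative, not taken from the article.

    ```python
    # Simulated power: posttest-only t test vs. ANCOVA with the pretest as covariate.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12)
    n, rho, effect, reps = 50, 0.7, 0.4, 2000

    hits_post, hits_ancova = 0, 0
    for _ in range(reps):
        pre = rng.normal(size=2 * n)
        post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=2 * n)
        post[:n] += effect                                 # treatment effect
        group = np.r_[np.ones(n), np.zeros(n)]

        # Posttest-only comparison
        if stats.ttest_ind(post[:n], post[n:]).pvalue < 0.05:
            hits_post += 1
        # ANCOVA via OLS: post ~ intercept + group + pre
        Xmat = np.c_[np.ones(2 * n), group, pre]
        beta, *_ = np.linalg.lstsq(Xmat, post, rcond=None)
        resid = post - Xmat @ beta
        se = np.sqrt(resid @ resid / (2 * n - 3) *
                     np.linalg.inv(Xmat.T @ Xmat)[1, 1])
        if 2 * stats.t.sf(abs(beta[1] / se), 2 * n - 3) < 0.05:
            hits_ancova += 1

    print("power posttest-only:", hits_post / reps)   # ~0.5 for these settings
    print("power ANCOVA      :", hits_ancova / reps)  # ~0.8 for these settings
    ```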

  7. Fine-scale spatial distribution of the common lugworm Arenicola marina, and effects of intertidal clam fishing

    NASA Astrophysics Data System (ADS)

    Boldina, Inna; Beninger, Peter G.

    2014-04-01

    Despite its ubiquity and its role as an ecosystem engineer on temperate intertidal mudflats, little is known of the spatial ecology of the lugworm Arenicola marina. We estimated lugworm densities and analyzed the spatial distribution of A. marina on a French Atlantic mudflat subjected to long-term clam digging activities, and compared these to a nearby pristine reference mudflat, using a combination of geostatistical techniques: point-pattern analysis, autocorrelation, and wavelet analysis. Lugworm densities were an order of magnitude greater at the reference site. Although A. marina showed an aggregative spatial distribution at both sites, the characteristics and intensity of aggregation differed markedly between sites. The reference site showed an inhibition process (regular distribution) at distances <7.5 cm, whereas the impacted site showed a random distribution at this scale. At distances from 15 cm to several tens of meters, the spatial distribution of A. marina was clearly aggregated at both sites; however, the autocorrelation strength was much weaker at the impacted site. In addition, the non-impacted site presented a multi-scale spatial distribution, which was not evident at the impacted site. The differences observed between the spatial distributions of the fishing-impacted vs. the non-impacted site reflect similar findings for other components of these two mudflat ecosystems, suggesting common community-level responses to prolonged mechanical perturbation: a decrease in naturally-occurring aggregation. This change may have consequences for basic biological characteristics such as reproduction, recruitment, growth, and feeding.

  8. Distribution of 'Candidatus Liberibacter asiaticus' Above and Below Ground in Texas Citrus.

    PubMed

    Louzada, Eliezer S; Vazquez, Omar Ed; Braswell, W Evan; Yanev, George; Devanaboina, Madhavi; Kunta, Madhurababu

    2016-07-01

    Detection of 'Candidatus Liberibacter asiaticus' represents one of the most difficult, yet critical, steps in controlling Huanglongbing disease. Efficient detection relies on understanding the underlying distribution of bacteria within trees. To that end, we studied the distribution of 'Ca. L. asiaticus' in leaves of 'Rio Red' grapefruit trees and in roots of 'Valencia' sweet orange trees grafted onto sour orange rootstock. We performed two sets of leaf collections on grapefruit trees: the first, a selective sampling targeting symptomatic leaves and their neighbors; the second, a systematic collection disregarding symptomology. From uprooted orange trees, we exhaustively sampled fibrous roots. In this study, the presence of 'Ca. L. asiaticus' was detected in leaves using real-time polymerase chain reaction (PCR) targeting the 16S ribosomal gene and in roots using the rplJ/rplL ribosomal protein genes, and was confirmed with conventional PCR and sequencing of the rplJ/rplL genes in both tissues. Among randomly collected leaves, 'Ca. L. asiaticus' was distributed in a patchy fashion. Detection of 'Ca. L. asiaticus' varied with leaf symptomology, with symptomatic leaves showing the highest frequency (74%), followed by their neighboring asymptomatic leaves (30%), while randomly distributed asymptomatic leaves had the lowest frequency (20%). Among symptomatic leaves, we found statistically significant differences in the mean number of bacterial cells with respect to both increasing distance of the leaf from the trunk and cardinal direction. The titer of 'Ca. L. asiaticus' cells was significantly greater on the north side of trees than on the south and west sides. Moreover, these directions showed different spatial distributions of 'Ca. L. asiaticus', with higher titers near the trunk on the south and west sides as opposed to further from the trunk on the north side. Similarly, we found spatial variation in 'Ca. L. asiaticus' distribution among root samples. 'Ca. L. asiaticus' was detected more frequently, and bacterial abundances were higher, among horizontally growing roots just under the soil surface (96%) than among deeper vertically growing roots (78%). Bacterial abundance declined slightly with distance from the trunk. These results point to paths of research that will likely prove useful in combating this devastating disease.

  9. ReSTART: A Novel Framework for Resource-Based Triage in Mass-Casualty Events.

    PubMed

    Mills, Alex F; Argon, Nilay T; Ziya, Serhan; Hiestand, Brian; Winslow, James

    2014-01-01

    Current guidelines for mass-casualty triage do not explicitly use information about resource availability. Even though this limitation has been widely recognized, how it should be addressed remains largely unexplored. The authors present a novel framework developed using operations research methods to account for resource limitations when determining priorities for transportation of critically injured patients. To illustrate how this framework can be used, they also develop two specific example methods, named ReSTART and Simple-ReSTART, both of which extend the widely adopted triage protocol Simple Triage and Rapid Treatment (START) by using a simple calculation to determine priorities based on the relative scarcity of transportation resources. The framework is supported by three techniques from operations research: mathematical analysis, optimization, and discrete-event simulation. The authors' algorithms were developed using mathematical analysis and optimization and then extensively tested using 9,000 discrete-event simulations on three distributions of patient severity (representing low, random, and high acuity). For each incident, the expected number of survivors was calculated under START, ReSTART, and Simple-ReSTART. A web-based decision support tool was constructed to help providers make prioritization decisions in the aftermath of mass-casualty incidents based on ReSTART. In simulations, ReSTART resulted in significantly lower mortality than START regardless of which severity distribution was used (paired t test, p < .01). The mean decrease in critical mortality, the percentage of immediate and delayed patients who die, was 8.5% for the low-acuity distribution (range -2.2% to 21.1%), 9.3% for the random distribution (range -0.2% to 21.2%), and 9.1% for the high-acuity distribution (range -0.7% to 21.1%). Although the critical mortality improvement due to ReSTART was different for each of the three severity distributions, the variation was less than 1 percentage point, indicating that the ReSTART policy is relatively robust to different severity distributions. Taking resource limitations into account in mass-casualty triage has the potential to increase the expected number of survivors. Further validation is required before field implementation; however, the framework proposed here can serve as the foundation for future work in this area.

  10. NSR&D Program Fiscal Year 2015 Funded Research Stochastic Modeling of Radioactive Material Releases Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrus, Jason P.; Pope, Chad; Toston, Mary

    2016-12-01

    Nonreactor nuclear facilities operating under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose distribution associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. Users can also specify custom distributions through a user-defined distribution option. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA, developed using the MATLAB coding framework, has a graphical user interface and can be installed on both Windows and Mac computers. SODA is a standalone software application and does not require MATLAB to function. SODA provides improved risk understanding, leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The SODA development project was funded through a grant from the DOE Nuclear Safety Research and Development Program.
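
    The sampling strategy described above is ordinary Monte Carlo uncertainty propagation. A minimal sketch follows, assuming a simple multiplicative source-term model with invented distributions, parameter values, and units; it is not SODA's code or its dose methodology.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000  # number of sampled scenarios

    # Hypothetical input distributions; any of these could instead be
    # fixed at a single point value, as the tool allows.
    mar   = rng.normal(100.0, 10.0, n)           # material at risk (g)
    arf   = rng.uniform(1e-4, 1e-3, n)           # airborne release fraction
    rf    = rng.triangular(0.1, 0.5, 1.0, n)     # respirable fraction
    chi_q = rng.lognormal(np.log(1e-4), 0.5, n)  # atmospheric dispersion (s/m^3)
    dcf   = 100.0                                # dose conversion factor (point value)

    dose = mar * arf * rf * chi_q * dcf          # one dose per sampled scenario
    print(np.percentile(dose, [5, 50, 95]))      # a distribution, not a point estimate
    ```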

  11. Characterizing pixel and point patterns with a hyperuniformity disorder length

    NASA Astrophysics Data System (ADS)

    Chieco, A. T.; Dreyfus, R.; Durian, D. J.

    2017-09-01

    We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns—where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h = L/2. Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h = (L/2)(f/d) for small f, and h = L/2 for f → 1. And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L, h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h(L) versus L. We call this approach "hyperuniformity disorder length spectroscopy".
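
    The binomial versus Einstein contrast above can be checked with a rough simulation: the windowed variance of area fraction times L^2 stays roughly constant for the binomial pattern (h = L/2 behavior) but decays roughly as 1/L for the Einstein pattern (boundary-dominated fluctuations). This sketch only verifies the scaling; it does not implement the paper's exact h(L) estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, phi = 1024, 0.1

    # Random binomial pattern: each pixel independently occupied with prob phi.
    binomial = rng.random((N, N)) < phi

    # Einstein pattern: lattice points given independent Gaussian displacements.
    s = 8                                        # lattice spacing in pixels
    xy = np.mgrid[s//2:N:s, s//2:N:s].reshape(2, -1).T.astype(float)
    xy += rng.normal(0, 2.0, xy.shape)           # rms displacement ~2 px per axis
    einstein = np.zeros((N, N), bool)
    ij = np.clip(np.round(xy).astype(int), 0, N - 1)
    einstein[ij[:, 0], ij[:, 1]] = True

    def var_phi(img, L, trials=3000):
        """Variance of area fraction over randomly placed LxL windows."""
        i = rng.integers(0, N - L, trials)
        j = rng.integers(0, N - L, trials)
        return np.var([img[a:a+L, b:b+L].mean() for a, b in zip(i, j)])

    for L in (16, 32, 64, 128):
        print(L, var_phi(binomial, L) * L**2,    # ~phi*(1-phi), constant
                 var_phi(einstein, L) * L**2)    # decays ~1/L: hyperuniform
    ```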

  12. Characterizing pixel and point patterns with a hyperuniformity disorder length.

    PubMed

    Chieco, A T; Dreyfus, R; Durian, D J

    2017-09-01

    We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns-where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h=L/2. Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h=(L/2)(f/d) for small f, and h=L/2 for f→1. And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L,h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h(L) versus L. We call this approach "hyperuniformity disorder length spectroscopy".

  13. The effect of dissipative inhomogeneous medium on the statistics of the wave intensity

    NASA Technical Reports Server (NTRS)

    Saatchi, Sasan S.

    1993-01-01

    One of the main theoretical points in the theory of wave propagation in random media is the derivation of closed-form equations to describe the statistics of the propagating waves. In particular, in one-dimensional problems, the closed-form representation of the multiple scattering effects is important since it contributes to understanding problems such as wave localization, backscattering enhancement, and intensity fluctuations. In this paper, the propagation of plane waves in a layer of one-dimensional dissipative random medium is considered. The medium is modeled by a complex permittivity whose imaginary part is a constant representing the absorption. The one-dimensional problem is mathematically equivalent to the analysis of a transmission line with randomly perturbed distributed parameters and a single-mode lossy waveguide, and the results can be used to study the propagation of radio waves through the atmosphere and the remote sensing of geophysical media. It is assumed that the scattering medium consists of an ensemble of one-dimensional point scatterers randomly positioned in a layer of thickness L with diffuse boundaries. A Poisson impulse process with density lambda is used to model the positions of scatterers in the medium. By employing the Markov properties of this process, an exact closed-form equation of Kolmogorov-Feller type was obtained for the probability density of the reflection coefficient. This equation was solved by combining two limiting cases: (1) when the density of scatterers is small; and (2) when the medium is weakly dissipative. A two-variable perturbation method for small lambda was used to obtain solutions valid for thick layers. These solutions are then asymptotically evaluated for small dissipation. To show the effect of dissipation, the mean and fluctuations of the reflected power are obtained. The results were compared with a lossy homogeneous medium and with a lossless inhomogeneous medium, and the regions where the effect of absorption is not essential were discussed.

  14. Topology for Dominance for Network of Multi-Agent System

    NASA Astrophysics Data System (ADS)

    Szeto, K. Y.

    2007-05-01

    The resource allocation problem in evolving two-dimensional point patterns is investigated for the existence of good strategies for the construction of initial configurations that lead to fast dominance of the pattern by one single species, which can be interpreted as market dominance by a company in the context of multi-agent systems in econophysics. For the hexagonal lattice, certain special topological arrangements of the resource in two dimensions, such as rings, lines, and clusters, have a higher probability of dominance than a random pattern. For more complex networks, a systematic way to search for a stable and dominant strategy of resource allocation in the changing environment is found by means of a genetic algorithm. Five typical features can be summarized by means of the distribution function for the local neighborhood of friends and enemies as well as the local clustering coefficients: (1) The winner has more triangles than the loser. (2) The winner tends to form clusters, connecting with other winners rather than with losers, while the loser also tends to connect with winners rather than losers. (3) The distribution function of friends as well as enemies for the winner is broader than the corresponding distribution function for the loser. (4) The connectivity at which the peak of the distribution of friends for the winner occurs is larger than that of the loser, while the peak value for friends for the winner is lower. (5) The connectivity at which the peak of the distribution of enemies for the winner occurs is smaller than that of the loser, while the peak value for enemies for the winner is lower. These five features appear to be general, at least in the context of two-dimensional hexagonal lattices of various sizes, hierarchical lattices, Voronoi diagrams, and high-dimensional random networks. These general local topological properties of networks are relevant to strategists aiming at dominance in evolving patterns when the interaction between the agents is local.

  15. Continuous Time Random Walks with memory and financial distributions

    NASA Astrophysics Data System (ADS)

    Montero, Miquel; Masoliver, Jaume

    2017-11-01

    We study financial distributions from the perspective of Continuous Time Random Walks with memory. We review some of our previous developments and apply them to financial problems. We also present some new models with memory that can be useful in characterizing tendency effects which are inherent in most markets. We also briefly study the effect on return distributions of fractional behaviors in the distribution of pausing times between successive transactions.

  16. Three-dimensional Simulations of Pure Deflagration Models for Thermonuclear Supernovae

    NASA Astrophysics Data System (ADS)

    Long, Min; Jordan, George C., IV; van Rossum, Daniel R.; Diemer, Benedikt; Graziani, Carlo; Kessler, Richard; Meyer, Bradley; Rich, Paul; Lamb, Don Q.

    2014-07-01

    We present a systematic study of the pure deflagration model of Type Ia supernovae (SNe Ia) using three-dimensional, high-resolution, full-star hydrodynamical simulations, nucleosynthetic yields calculated using Lagrangian tracer particles, and light curves calculated using radiation transport. We evaluate the simulations by comparing their predicted light curves with many observed SNe Ia using the SALT2 data-driven model and find that the simulations may correspond to under-luminous SNe Iax. We explore the effects of the initial conditions on our results by varying the number of randomly selected ignition points from 63 to 3500, and the radius of the centered sphere they are confined in from 128 to 384 km. We find that the rate of nuclear burning depends on the number of ignition points at early times, the density of ignition points at intermediate times, and the radius of the confining sphere at late times. The results depend primarily on the number of ignition points, but we do not expect this to be the case in general. The simulations with few ignition points release more nuclear energy E nuc, have larger kinetic energies E K, and produce more 56Ni than those with many ignition points, and differ in the distribution of 56Ni, Si, and C/O in the ejecta. For these reasons, the simulations with few ignition points exhibit higher peak B-band absolute magnitudes M B and light curves that rise and decline more quickly; their M B and light curves resemble those of under-luminous SNe Iax, while those for simulations with many ignition points are not.
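
    The varied initial conditions amount to drawing ignition points uniformly inside a centered sphere. A minimal sketch of such a sampler follows (the counts and radii echo the grid quoted above; the sampler itself is the standard method, not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def ignition_points(n, radius_km):
        """n points uniform in a sphere: isotropic directions from normalized
        Gaussians, radii from the inverse CDF r = R * u**(1/3)."""
        v = rng.normal(size=(n, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        r = radius_km * rng.random(n) ** (1 / 3)
        return v * r[:, None]

    pts = ignition_points(63, 128.0)          # e.g. 63 points within 128 km
    print(np.linalg.norm(pts, axis=1).max())  # never exceeds the radius
    ```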

  17. Radial alignment of elliptical galaxies by the tidal force of a cluster of galaxies

    NASA Astrophysics Data System (ADS)

    Rong, Yu; Yi, Shu-Xu; Zhang, Shuang-Nan; Tu, Hong

    2015-08-01

    Unlike the random radial orientation distribution of field elliptical galaxies, galaxies in a cluster are expected to point preferentially towards the centre of the cluster, as a result of the cluster's tidal force on its member galaxies. In this work, an analytic model is formulated to simulate this effect. The deformation time-scale of a galaxy in a cluster is usually much shorter than the time-scale of change of the tidal force; the dynamical process of tidal interaction within the galaxy can thus be ignored. The equilibrium shape of a galaxy is then assumed to be the surface of equipotential that is the sum of the self-gravitational potential of the galaxy and the tidal potential of the cluster at this location. We use a Monte Carlo method to calculate the radial orientation distribution of cluster galaxies, by assuming a Navarro-Frenk-White mass profile for the cluster and the initial ellipticity of field galaxies. The radial angles show a single-peak distribution centred at zero. The Monte Carlo simulations also show that a shift of the reference centre from the real cluster centre weakens the anisotropy of the radial angle distribution. Therefore, the expected radial alignment cannot be revealed if the distribution of spatial position angle is used instead of that of radial angle. The observed radial orientations of elliptical galaxies in cluster Abell 2744 are consistent with the simulated distribution.

  18. Trade-off study and computer simulation for assessing spacecraft pointing accuracy and stability capabilities

    NASA Astrophysics Data System (ADS)

    Algrain, Marcelo C.; Powers, Richard M.

    1997-05-01

    A case study, written in a tutorial manner, is presented where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. Models for major system components are described. Among them are spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are the desired attitude angles and rate set points. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.

  19. Hyperuniformity, quasi-long-range correlations, and void-space constraints in maximally random jammed particle packings. II. Anisotropy in particle shape.

    PubMed

    Zachary, Chase E; Jiao, Yang; Torquato, Salvatore

    2011-05-01

    We extend the results from the first part of this series of two papers by examining hyperuniformity in heterogeneous media composed of impenetrable anisotropic inclusions. Specifically, we consider maximally random jammed (MRJ) packings of hard ellipses and superdisks and show that these systems both possess vanishing infinite-wavelength local-volume-fraction fluctuations and quasi-long-range pair correlations scaling as r^{-(d+1)} in d Euclidean dimensions. Our results suggest a strong generalization of a conjecture by Torquato and Stillinger [Phys. Rev. E 68, 041113 (2003)], namely, that all strictly jammed saturated packings of hard particles, including those with size and shape distributions, are hyperuniform with signature quasi-long-range correlations. We show that our arguments concerning the constrained distribution of the void space in MRJ packings directly extend to hard-ellipse and superdisk packings, thereby providing a direct structural explanation for the appearance of hyperuniformity and quasi-long-range correlations in these systems. Additionally, we examine general heterogeneous media with anisotropic inclusions and show unexpectedly that one can decorate a periodic point pattern to obtain a hard-particle system that is not hyperuniform with respect to local-volume-fraction fluctuations. This apparent discrepancy can also be rationalized by appealing to the irregular distribution of the void space arising from the anisotropic shapes of the particles. Our work suggests the intriguing possibility that the MRJ states of hard particles share certain universal features independent of the local properties of the packings, including the packing fraction and average contact number per particle.

  1. Spatial association of marine dockage with land-borne infestations of invasive termites (Isoptera: Rhinotermitidae: Coptotermes) in urban south Florida.

    PubMed

    Hochmair, Hartwig H; Scheffrahn, Rudolf H

    2010-08-01

    Marine vessels have been implicated in the anthropogenic dispersal of invasive termites for the past 500 yr. It has long been suspected that two invasive termites, the Formosan subterranean termite, Coptotermes formosanus Shiraki, and Coptotermes gestroi (Wasmann) (Isoptera: Rhinotermitidae), were introduced to and dispersed throughout South Florida by sailboats and yachts. We compared the distances between 190 terrestrial point records for the Formosan subterranean termite, 177 records for C. gestroi, and random locations with the nearest marine dockage by using spatial analysis. Results show that the median distances to the nearest docks associated with both C. gestroi and the Formosan subterranean termite are significantly smaller than for the random points. These results support the hypothesis that C. gestroi and the Formosan subterranean termite are significantly closer to potential infested boat locations, i.e., marine docks, than random points in these urban areas. The results of our study suggest yet another source of aggregation in the context of exotic species, namely, hubs for pleasure boating.
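
    The spatial comparison reduces to nearest-neighbor distances from each record to the dock set, which a k-d tree makes cheap. A sketch with invented coordinates follows (in the real analysis, the observed medians come out significantly smaller than the random ones):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(11)
    docks      = rng.uniform(0, 50, (300, 2))  # hypothetical dock locations (km)
    records    = rng.uniform(0, 50, (190, 2))  # termite collection points
    random_pts = rng.uniform(0, 50, (190, 2))  # matched random locations

    tree = cKDTree(docks)
    d_obs, _  = tree.query(records)            # distance to nearest dock
    d_rand, _ = tree.query(random_pts)
    print(np.median(d_obs), np.median(d_rand)) # compare median distances
    ```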

  2. The Miniaturization of the AFIT Random Noise Radar

    DTIC Science & Technology

    2013-03-01

    Recent advances in technology and signal processing techniques have opened the door to using an ultra-wideband random noise radar. (Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio. Distribution Statement A: approved for public release; distribution unlimited.)

  3. Photon event distribution sampling: an image formation technique for scanning microscopes that permits tracking of sub-diffraction particles with high spatial and temporal resolutions.

    PubMed

    Larkin, J D; Publicover, N G; Sutko, J L

    2011-01-01

    In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.

  4. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2 / (n^{-1} ∑_{j=1}^n x_j^2), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e., is defined as P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n and analyse their behaviour.
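
    The asymptotic claim is straightforward to probe numerically. The sketch below samples GUE matrices and compares the empirical density of w with the normalized limit sqrt((4 - w)/w)/(2*pi); since the eigenvalues are exchangeable, all n of them can be used as samples of the "randomly chosen" one.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, trials, ws = 100, 300, []

    for _ in range(trials):
        m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        h = (m + m.conj().T) / 2           # GUE (beta = 2) random matrix
        x = np.linalg.eigvalsh(h)
        ws.extend(x**2 / np.mean(x**2))    # w is scale-invariant by construction

    hist, edges = np.histogram(ws, bins=20, range=(0, 4), density=True)
    c = (edges[:-1] + edges[1:]) / 2
    mp = np.sqrt((4 - c) / c) / (2 * np.pi)  # normalized limiting density
    print(np.round(hist, 3))
    print(np.round(mp, 3))
    ```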

  5. Theory and generation of conditional, scalable sub-Gaussian random fields

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2016-03-01

    Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one- and two-dimensional synthetic test cases.

  6. A randomized trial of social media from Circulation.

    PubMed

    Fox, Caroline S; Bonaca, Marc A; Ryan, John J; Massaro, Joseph M; Barry, Karen; Loscalzo, Joseph

    2015-01-06

    Medical journals use social media to distribute the findings of published articles. Whether social media exposure to original articles improves article impact metrics is uncertain. Articles were randomized to receive targeted social media exposure from Circulation, including postings on the journal's Facebook and Twitter feeds. The primary end point was 30-day article page views. We conducted an intention-to-treat analysis comparing article page views by the Wilcoxon rank-sum test between articles randomized to social media as compared with those in the control group, which received no social media from Circulation. Prespecified subgroups included article type (population/clinical/basic), US versus non-US corresponding author, and whether the article received an editorial. Overall, 243 articles were randomized: 121 in the social media arm and 122 in the control arm. There was no difference in median 30-day page views (409 [social media] versus 392 [control], P=0.80). No differences were observed by article type (clinical, population, or basic science; P=0.19), whether an article had an editorial (P=0.87), or whether the corresponding author was from the United States (P=0.73). A social media strategy for a cardiovascular journal did not increase the number of times an article was viewed. Further research is necessary to understand and quantify the ways in which social media can increase the impact of published cardiovascular research. © 2014 American Heart Association, Inc.
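
    The primary comparison is a rank-sum test on two samples of page views. A minimal sketch with invented, skewed counts (not the trial's data):

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(4)
    social  = rng.lognormal(np.log(409), 0.8, 121).round()  # hypothetical views
    control = rng.lognormal(np.log(392), 0.8, 122).round()

    stat, p = mannwhitneyu(social, control)  # Wilcoxon rank-sum / Mann-Whitney U
    print(np.median(social), np.median(control), round(p, 3))
    ```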

  7. Relative Pose Estimation Using Image Feature Triplets

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Rottensteiner, F.; Heipke, C.

    2015-03-01

    A fully automated reconstruction of the trajectory of image sequences using point correspondences is turning into routine practice. However, there are cases in which point features are hardly detectable, cannot be localized in a stable distribution, and consequently lead to insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effectiveness of feature integration on relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method aims at establishing hypotheses about potential minimal line matches that can be used for determining the parameters of relative orientation (pose estimation) of two images with respect to the reference one, and then quantifying the agreement using the estimated orientation parameters. Rather than randomly choosing the line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. In order to also work with image sequences, the adjustment is formulated incrementally. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.

  8. Students perception on the usage of PowerPoint in learning calculus

    NASA Astrophysics Data System (ADS)

    Othman, Zarith Sofiah; Tarmuji, Nor Habibah; Hilmi, Zulkifli Ab Ghani

    2017-04-01

    Mathematics is a core subject in most science and technology courses and in some social science programs. However, the low achievement of students in the subject, especially in topics such as differentiation and integration, is always an issue. The purpose of this paper is to determine students' perception of learning mathematics using PowerPoint for integration concepts at the undergraduate level, with respect to mathematics anxiety, learning enjoyment, mobility, and learning satisfaction. The main content of the PowerPoint presentation focused on the integration method, with historical elements as an added value. The study was conducted on 48 students randomly selected from the computer and applied sciences programs as the experimental group. Questionnaires were distributed to students to explore their learning experiences. Another 51 students, who were taught using the traditional chalkboard method, served as the control group. Both groups were given a test on integration. The statistical methods used were descriptive statistics and an independent-samples t-test between the experimental and control groups. The findings showed that most students responded positively to the PowerPoint presentations with respect to mobility and learning satisfaction. The experimental group performed better than the control group.
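
    The reported comparison is a standard independent-samples t-test between the two groups. A minimal sketch with invented scores:

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(5)
    powerpoint = rng.normal(68, 12, 48)  # experimental group, n = 48 (hypothetical)
    chalkboard = rng.normal(61, 12, 51)  # control group, n = 51 (hypothetical)

    t, p = ttest_ind(powerpoint, chalkboard)
    print(f"t = {t:.2f}, p = {p:.4f}")
    ```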

  9. A Data Cleaning Method for Big Trace Data Using Movement Consistency

    PubMed Central

    Tang, Luliang; Zhang, Xia; Li, Qingquan

    2018-01-01

    Given the popularization of GPS technologies, the massive spatiotemporal GPS traces collected by vehicles are becoming a new kind of big data source for urban geographic information extraction. The growing volume of the dataset, however, creates processing and management difficulties, while its low quality generates uncertainties when investigating human activities. Based on the error distribution law and position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories using movement characteristic points, that is, GPS points at which the motion status of the vehicle transforms from one state into another. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of the sub-trajectory. The movement consistency model is built using the random sample consensus algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated through extensive experiments on GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan city, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
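
    A toy version of the movement-consistency step, assuming a low-order polynomial motion model fitted robustly with random sample consensus over one sub-trajectory; the coordinates, noise levels, and threshold are invented, and this is not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, RANSACRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(6)
    t = np.linspace(0, 60, 120)               # time (s) along one sub-trajectory
    x = 5.0 * t + rng.normal(0, 2.0, t.size)  # smooth motion + ordinary GPS noise
    bad = rng.choice(t.size, 8, replace=False)
    x[bad] += rng.normal(0, 80, 8)            # gross multipath-style errors

    ransac = RANSACRegressor(
        make_pipeline(PolynomialFeatures(2), LinearRegression()),
        residual_threshold=10.0)              # movement-consistency tolerance
    ransac.fit(t[:, None], x)
    print((~ransac.inlier_mask_).sum(), "points flagged for cleaning")
    ```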

  10. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

    Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensures that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. We assessed the performance of our method in computing the error in estimated similarity parameters by applying that method to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased – i.e. our confidence in the registration of images from different individuals increased – for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
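
    Once the parameters are multivariate Gaussian, confidence intervals follow directly from the covariance matrix. A minimal sketch, with invented parameter estimates and a (diagonal) covariance standing in for the one the registration produces:

    ```python
    import numpy as np

    # Hypothetical estimates for the 7 similarity parameters:
    # 3 translations (mm), 3 rotations (rad), 1 global scale.
    theta = np.array([1.2, -0.8, 2.1, 0.01, -0.02, 0.005, 1.015])
    cov = np.diag([0.04, 0.05, 0.03, 1e-5, 1e-5, 1e-5, 4e-5])

    se = np.sqrt(np.diag(cov))
    lo, hi = theta - 1.96 * se, theta + 1.96 * se  # 95% confidence intervals
    for name, a, b in zip("tx ty tz rx ry rz s".split(), lo, hi):
        print(f"{name}: [{a:.4f}, {b:.4f}]")
    ```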

  11. Safety of Simultaneous Coronary Artery Bypass Grafting and Carotid Endarterectomy Versus Isolated Coronary Artery Bypass Grafting: A Randomized Clinical Trial.

    PubMed

    Weimar, Christian; Bilbilis, Konstantinos; Rekowski, Jan; Holst, Torulv; Beyersdorf, Friedhelm; Breuer, Martin; Dahm, Manfred; Diegeler, Anno; Kowalski, Arne; Martens, Sven; Mohr, Friedrich W; Ondrášek, Jiri; Reiter, Beate; Roth, Peter; Seipelt, Ralf; Siggelkow, Markus; Steinhoff, Gustav; Moritz, Anton; Wilhelmi, Mathias; Wimmer-Greinecker, Gerhard; Diener, Hans-Christoph; Jakob, Heinz; Ose, Claudia; Scherag, Andre; Knipp, Stephan C

    2017-10-01

    The optimal operative strategy in patients with severe carotid artery disease undergoing coronary artery bypass grafting (CABG) is unknown. We sought to investigate the safety and efficacy of synchronous combined carotid endarterectomy and CABG as compared with isolated CABG. Patients with asymptomatic high-grade carotid artery stenosis ≥80% according to ECST (European Carotid Surgery Trial) ultrasound criteria (corresponding to ≥70% NASCET [North American Symptomatic Carotid Endarterectomy Trial]) who required CABG surgery were randomly assigned to synchronous carotid endarterectomy+CABG or isolated CABG. To avoid unbalanced prognostic factor distributions, randomization was stratified by center, age, sex, and modified Rankin Scale. The primary composite end point was the rate of stroke or death at 30 days. From 2010 to 2014, a total of 129 patients were enrolled at 17 centers in Germany and the Czech Republic. Because of withdrawal of funding after insufficient recruitment, enrolment was terminated early. At 30 days, the rate of any stroke or death in the intention-to-treat population was 12/65 (18.5%) in patients receiving synchronous carotid endarterectomy+CABG as compared with 6/62 (9.7%) in patients receiving isolated CABG (absolute risk reduction, 8.8%; 95% confidence interval, -3.2% to 20.8%; P_Wald=0.12). Also for all secondary end points at 30 days and 1 year, there was no evidence for a significant treatment-group effect, although patients undergoing isolated CABG tended to have better outcomes. Although our results cannot rule out a treatment-group effect because of lack of power, a superiority of the synchronous combined carotid endarterectomy+CABG approach seems unlikely. Five-year follow-up of patients is still ongoing. URL: https://www.controlled-trials.com. Unique identifier: ISRCTN13486906. Copyright © 2017 The Author(s).
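
    The reported interval follows from the event counts with a Wald calculation, reproduced here as a check:

    ```python
    import numpy as np

    p1, n1 = 12 / 65, 65   # synchronous CEA+CABG: stroke or death at 30 days
    p2, n2 = 6 / 62, 62    # isolated CABG

    rd = p1 - p2                                   # risk difference
    se = np.sqrt(p1*(1 - p1)/n1 + p2*(1 - p2)/n2)  # Wald standard error
    print(f"{rd:.1%}  95% CI ({rd - 1.96*se:.1%}, {rd + 1.96*se:.1%})")
    # -> 8.8%  95% CI (-3.2%, 20.8%), matching the reported interval
    ```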

  12. Radiation-related quality of life parameters after targeted intraoperative radiotherapy versus whole breast radiotherapy in patients with breast cancer: results from the randomized phase III trial TARGIT-A.

    PubMed

    Welzel, Grit; Boch, Angela; Sperk, Elena; Hofmann, Frank; Kraus-Tiefenbacher, Uta; Gerhardt, Axel; Suetterlin, Marc; Wenz, Frederik

    2013-01-07

    Intraoperative radiotherapy (IORT) is a new treatment approach for early stage breast cancer. This study reports on the effects of IORT on radiation-related quality of life (QoL) parameters. Two hundred and thirty women with stage I-III breast cancer (age, 31 to 84 years) were entered into the study. A single-center subgroup of 87 women from the two arms of the randomized phase III trial TARGIT-A (TARGeted Intra-operative radioTherapy versus whole breast radiotherapy for breast cancer) was analyzed. Furthermore, results were compared to non-randomized control groups: n = 90 receiving IORT as a tumor bed boost followed by external beam whole breast radiotherapy (EBRT) outside of TARGIT-A (IORT-boost), and n = 53 treated with EBRT followed by an external-beam boost (EBRT-boost). QoL was collected using the European Organization for Research and Treatment of Cancer Quality of Life Questionnaires C30 (QLQ-C30) and BR23 (QLQ-BR23). The mean follow-up period in the TARGIT-A groups was 32 months, versus 39 months in the non-randomized control groups. Patients receiving IORT alone reported less general pain (21.3 points), fewer breast (7.0 points) and arm (15.1 points) symptoms, and better role functioning (78.7 points) than patients receiving EBRT (40.9; 19.0; 32.8; and 60.5 points, respectively, P < 0.01). Patients receiving IORT alone also had fewer breast symptoms than TARGIT-A patients receiving IORT followed by EBRT for high risk features on final pathology (IORT-EBRT; 7.0 versus 29.7 points, P < 0.01). There were no significant differences between TARGIT-A patients receiving IORT-EBRT compared to non-randomized IORT-boost or EBRT-boost patients and patients receiving EBRT without a boost. In the randomized setting, important radiation-related QoL parameters after IORT were superior to EBRT. Non-randomized comparisons showed equivalent parameters in the IORT-EBRT group and the control groups.

  13. Averaging of elastic constants for polycrystals

    DOE PAGES

    Blaschke, Daniel N.

    2017-10-13

    Many materials of interest are polycrystals, i.e., aggregates of single crystals. Randomly distributed orientations of single crystals lead to macroscopically isotropic properties. In this paper, we briefly review strategies for calculating effective isotropic second and third order elastic constants from the single crystal ones. Our main emphasis is on single crystals of cubic symmetry. Specifically, the averaging of third order elastic constants has not been particularly successful in the past, and discrepancies have often been attributed to texturing of polycrystals as well as to uncertainties in the measurement of elastic constants of both poly and single crystals. While this may well be true, we also point out here shortcomings in the theoretical averaging framework.
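
    For the second-order constants of a cubic crystal, the standard Voigt and Reuss bounds and their Hill average can be written in a few lines; the constants below are illustrative values for a copper-like crystal (GPa), and the harder third-order averaging the paper examines is not attempted here.

    ```python
    def voigt_reuss_hill(c11, c12, c44):
        """Isotropic averages for a cubic polycrystal (second order only)."""
        k = (c11 + 2 * c12) / 3                  # bulk modulus (Voigt = Reuss)
        g_v = (c11 - c12 + 3 * c44) / 5          # Voigt (upper) shear bound
        g_r = 5 * (c11 - c12) * c44 / (4 * c44 + 3 * (c11 - c12))  # Reuss bound
        return k, g_v, g_r, (g_v + g_r) / 2      # Hill shear estimate

    print(voigt_reuss_hill(168.4, 121.4, 75.4))  # copper-like single crystal
    ```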

  14. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.
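
    Stripped of the optics, programming the output distribution is inverse-transform sampling: the shaped temporal mode makes detection times land at chosen quantiles. A classical sketch, with uniform variates standing in for the quantum randomness and an invented triangular target on [0, 1]:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    u = rng.random(100_000)  # stand-in for unbiased quantum random numbers

    # Inverse CDF of a triangular density peaked at 0.5; no post-processing
    # of u is needed, only a deterministic reshaping map.
    samples = np.where(u < 0.5, np.sqrt(u / 2), 1 - np.sqrt((1 - u) / 2))
    print(np.histogram(samples, bins=5, range=(0, 1))[0])
    ```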

  15. The Shark Random Swim (Lévy Flight with Memory)

    NASA Astrophysics Data System (ADS)

    Businger, Silvia

    2018-05-01

    The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory about the whole past. We study the Shark Random Swim, a random walk with memory about the whole past, whose steps are α-stable distributed with α ∈ (0, 2]. Our aim in this work is to study the impact of the heavy-tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
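
    One plausible reading of the step rule, generalizing the ERW convention to α-stable steps (recall a uniformly chosen past step; repeat it with probability p, otherwise take its negative), can be simulated directly. The exact convention is an assumption here, not a transcription of the paper.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(8)
    alpha, p, n = 1.5, 0.75, 2000

    steps = np.empty(n)
    steps[0] = levy_stable.rvs(alpha, 0, random_state=rng)  # symmetric stable
    for t in range(1, n):
        past = steps[rng.integers(t)]      # remember one uniformly chosen step
        steps[t] = past if rng.random() < p else -past
    print(steps.cumsum()[-1])              # one realization of the swim
    ```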

  16. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal together with the position and orientation information of the sensor is recorded. These recorded data are post-processed with GNSS/IMU data, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: at 532 nm (visible, green), 1064 nm (near infrared, NIR), and 1550 nm (intermediate infrared, IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types, with over 90% overall accuracy achieved using multispectral lidar point clouds for 3D land cover classification.

  17. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

    Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
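
    A minimal MBH loop with Cauchy-distributed hops, using the Rastrigin function as a stand-in objective rather than a low-thrust trajectory problem; replacing the Cauchy draw with a symmetrized Pareto draw gives the heavier-tailed variant discussed above.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)

    def rastrigin(x):  # multimodal test function, global minimum 0 at x = 0
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    res = minimize(rastrigin, rng.uniform(-5, 5, 4))
    best_x, best_f = res.x, res.fun
    for _ in range(300):
        hop = best_x + 0.5 * rng.standard_cauchy(4)  # long-tailed perturbation
        res = minimize(rastrigin, hop)               # local search from the hop
        if res.fun < best_f:                         # monotonic: keep improvements
            best_x, best_f = res.x, res.fun
    print(best_f)
    ```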

  18. The estimation of branching curves in the presence of subject-specific random effects.

    PubMed

    Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng

    2014-12-20

    Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.

  19. A randomized, controlled clinical trial of intravenous lipid emulsion as an adjunctive treatment for permethrin toxicosis in cats.

    PubMed

    Peacock, Rachel E; Hosgood, Giselle; Swindells, Katrin L; Smart, Lisa

    2015-01-01

    To assess for any clinical benefit of intravenous lipid emulsion (ILE) for permethrin toxicosis in cats by comparing the progression of clinical signs of cats before and after treatment with ILE to cats treated with a saline control. To accomplish this objective, a clinical staging system for cats with permethrin toxicosis was developed and validated. Prospective, multicenter, randomized, controlled clinical trial. University veterinary teaching hospital and 12 private veterinary emergency hospitals. Thirty-four client-owned cats with permethrin toxicosis. A clinical staging system was designed based on abnormalities found on physical examination of cats with permethrin toxicosis. The clinical staging system had 6 stages, ranging from Stage A for cats with no abnormalities to Stage F for cats with grand mal seizures. The system was validated for intraviewer and interviewer variability. Cats in the clinical trial were randomized to receive 15 mL/kg of either intravenous 0.9% saline (control) or 20% ILE over 60 minutes. For each cat, a clinical stage was recorded at set time points before and after the randomized treatment was administered. The distribution of clinical stage stratified over time was compared across treatment groups. The clinical staging system showed excellent repeatability (P = 1.0) and reliability (P = 1.0). In the clinical trial, there was a significant difference in the distribution of clinical stages over time (P < 0.001) and from presentation stage to Stage B (P = 0.006), with ILE-treated cats (n = 20) having lower clinical stages earlier than control cats (n = 14). There was no significant difference in signalment, body weight, or supportive treatment between the groups. The clinical staging system was repeatable and reliable. Clinical stages of permethrin toxicosis in ILE-treated cats improved earlier compared to control cats, suggesting ILE may be a useful adjunctive therapy in the treatment of permethrin toxicosis in cats. © Veterinary Emergency and Critical Care Society 2015.

  20. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue 'Second quantum revolution: foundational questions'.
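
    For binary variables, the maximal coupling referred to above has a simple closed form: the joint puts mass min(p, q) on (1, 1) and min(1 - p, 1 - q) on (0, 0), so the variables agree with probability 1 - |p - q|. A two-line sketch:

    ```python
    def max_agreement(p, q):
        """Largest possible P(X = Y) for X ~ Bernoulli(p), Y ~ Bernoulli(q)."""
        return min(p, q) + min(1 - p, 1 - q)  # equals 1 - abs(p - q)

    print(max_agreement(0.7, 0.4))  # 0.7: the content-sharing pair can agree 70%
    ```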
