Sample records for initially randomly distributed

  1. Work distributions for random sudden quantum quenches

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter

    2017-05-01

    The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite-dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
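
    As a rough illustration of this setup (a sketch, not the authors' calculation), the following Python snippet samples the work for sudden quenches of a two-level system with a fixed initial Hamiltonian and final Hamiltonians drawn from a GUE; the ensemble normalization, inverse temperature, and energy scale are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gue(n):
        """Draw an n x n Hermitian matrix from the Gaussian unitary ensemble."""
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return (a + a.conj().T) / 2

    def work_samples(h_i, beta, n_quenches=20000):
        """Sample the quench work W = E_f(n) - E_i(m): measure energy in the
        initial Hamiltonian h_i (thermal state), quench suddenly to a random
        GUE Hamiltonian, then measure the final energy."""
        e_i, v_i = np.linalg.eigh(h_i)
        p_i = np.exp(-beta * e_i)
        p_i /= p_i.sum()                                 # Gibbs weights
        w = np.empty(n_quenches)
        for k in range(n_quenches):
            e_f, v_f = np.linalg.eigh(gue(len(e_i)))
            m = rng.choice(len(e_i), p=p_i)              # initial eigenstate
            overlaps = np.abs(v_f.conj().T @ v_i[:, m]) ** 2
            n = rng.choice(len(e_f), p=overlaps)         # final measurement
            w[k] = e_f[n] - e_i[m]
        return w

    w = work_samples(h_i=np.diag([-0.5, 0.5]), beta=1.0)  # two-level system
    print(f"mean work {w.mean():.3f}, std {w.std():.3f}")
    ```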

  2. Micromechanical analysis of composites with fibers distributed randomly over the transverse cross-section

    NASA Astrophysics Data System (ADS)

    Weng, Jingmeng; Wen, Weidong; Cui, Haitao; Chen, Bo

    2018-06-01

    A new method to generate a random distribution of fibers in the transverse cross-section of fiber-reinforced composites with a high fiber volume fraction is presented in this paper. Based on microscopy observation of the transverse cross-sections of unidirectional composite laminates, a hexagonal arrangement is set as the initial configuration, each fiber is assigned an initial velocity of arbitrary magnitude and direction, and the micro-scale representative volume element (RVE) is established by simulating perfectly elastic collisions. Combined with the proposed periodic boundary conditions, which are suitable for multi-axial loading, the effective elastic properties of composite materials can be predicted. The predicted properties show reasonable agreement with experimental results. A comparison of the stress fields of RVEs with randomly and periodically distributed fibers shows that the predicted elastic modulus of the RVE with randomly distributed fibers is greater than that of the RVE with periodically distributed fibers.
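
    A minimal sketch of the placement problem, assuming a plain random-sequential-addition scheme instead of the authors' collision-driven perturbation of a hexagonal packing (which is what reaches high volume fractions); all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def random_fibers(n_fibers, radius, size=1.0, max_tries=100000):
        """Place non-overlapping fiber centres in a square RVE cross-section
        by random sequential addition. Saturates at moderate volume
        fractions; the paper's elastic-collision scheme goes much higher."""
        centres = []
        tries = 0
        while len(centres) < n_fibers and tries < max_tries:
            c = rng.uniform(radius, size - radius, 2)
            if all(np.linalg.norm(c - p) >= 2 * radius for p in centres):
                centres.append(c)
            tries += 1
        return np.array(centres)

    pts = random_fibers(n_fibers=30, radius=0.05)
    vf = len(pts) * np.pi * 0.05 ** 2  # fiber area fraction of the unit RVE
    print(f"placed {len(pts)} fibers, volume fraction {vf:.2f}")
    ```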

  3. The topology of large-scale structure. I - Topology and the random phase hypothesis [galactic formation models]

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
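
    For a Gaussian (random phase) field, the genus of density contours per unit volume follows the universal curve g(nu) proportional to (1 - nu^2) exp(-nu^2/2), with nu the threshold in units of the field's standard deviation; a minimal evaluation, leaving the spectrum-dependent amplitude as a free parameter:

    ```python
    import numpy as np

    def genus_density(nu, amplitude=1.0):
        """Universal genus-per-unit-volume curve of a Gaussian (random phase)
        field vs. threshold nu (in units of the field's standard deviation).
        `amplitude` stands in for the power-spectrum-dependent prefactor."""
        return amplitude * (1.0 - nu ** 2) * np.exp(-nu ** 2 / 2.0)

    # positive genus: sponge-like connectivity; negative: isolated clusters/voids
    for nu in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(f"nu = {nu:+.1f}  genus density = {genus_density(nu):+.3f}")
    ```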

  4. Knowledge Representation for Decision Making Agents

    DTIC Science & Technology

    2013-07-15

    ...knowledge map. This knowledge map is a dictionary data structure called tmap in the code. It represents a network of locations with a number [0,1 ... fillRandom(): informed initial tmap distribution (randomly generated per node) with belief one. • initialBelief = 3 uses fillCenter(): normal ... triggered on AllMyFMsHaveBeenInitialized. 2. Executes main.py • Initializes knowledge map labeled tmap. • Calls initialize search() – resets distanceTot and ...

  5. Persistent fluctuations in synchronization rate in globally coupled oscillators with periodic external forcing

    NASA Astrophysics Data System (ADS)

    Atsumi, Yu; Nakao, Hiroya

    2012-05-01

    A system of phase oscillators with repulsive global coupling and periodic external forcing undergoing asynchronous rotation is considered. The synchronization rate of the system can exhibit persistent fluctuations depending on parameters and initial phase distributions, and the amplitude of the fluctuations scales with the system size for uniformly random initial phase distributions. Using the Watanabe-Strogatz transformation that reduces the original system to low-dimensional macroscopic equations, we show that the fluctuations are collective dynamics of the system corresponding to low-dimensional trajectories of the reduced equations. It is argued that the amplitude of the fluctuations is determined by the inhomogeneity of the initial phase distribution, resulting in system-size scaling for the random case.
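
    A minimal sketch of a forced, repulsively coupled phase-oscillator ensemble of the kind described above, with uniformly random initial phases; the exact model terms and parameter values in the paper may differ, so this is only a structural illustration of how the order-parameter fluctuations can be measured.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(n=500, k=-0.5, omega=1.0, amp=0.8, f_omega=1.2,
                 dt=0.01, steps=20000):
        """Euler integration of forced, repulsively (k < 0) coupled phase
        oscillators; returns the Kuramoto order parameter r(t)."""
        theta = rng.uniform(0, 2 * np.pi, n)   # uniformly random initial phases
        r = np.empty(steps)
        for t in range(steps):
            z = np.exp(1j * theta).mean()      # mean field
            r[t] = abs(z)
            coupling = k * np.imag(z * np.exp(-1j * theta))  # (k/N) sum sin(θj - θi)
            forcing = amp * np.sin(f_omega * t * dt - theta)
            theta += dt * (omega + coupling + forcing)
        return r

    r = simulate()
    print(f"late-time order parameter: mean {r[-5000:].mean():.3f}, "
          f"std {r[-5000:].std():.3f}")
    ```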

  6. Experimental Study of the Effect of the Initial Spectrum Width on the Statistics of Random Wave Groups

    NASA Astrophysics Data System (ADS)

    Shemer, L.; Sergeeva, A.

    2009-12-01

    The statistics of a random water wave field determine the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral characteristics of the wave field. Laboratory investigation of the spatial variation of random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, which is the biggest facility of its kind in Europe. Numerous realizations of a wave field with the prescribed frequency power spectrum, yet randomly distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wave length but different width were considered. For each spectral shape, the total duration of sampling over all realizations was long enough to yield a sufficient sample size for reliable statistics. Throughout the experiments, an effort was made to retain the characteristic wave height and thus the degree of nonlinearity of the wave field. The spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape. The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is highest at a distance of about 100 m. Acknowledgement: This study was carried out in the framework of the EC-supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste" (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for the rectangular initial spectral shape; carrier wave period T0 = 1.5 s.
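
    Generating one realization of such a wave field, with a prescribed (here rectangular) frequency spectrum and independent uniformly random phases per harmonic, is essentially one inverse FFT; the spectrum shape, width, and normalization below are assumptions, and this linear superposition only supplies the wavemaker signal whose nonlinear spatial evolution the experiment measures.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def random_wave_realization(n=4096, dt=0.05, t0=1.5, width=0.1):
        """One realization of a random wave field: rectangular frequency
        spectrum around the carrier f0 = 1/t0, independent uniform phases."""
        f = np.fft.rfftfreq(n, dt)
        f0 = 1.0 / t0
        s = np.where(np.abs(f - f0) < width / 2, 1.0, 0.0)  # rectangular spectrum
        phases = rng.uniform(0, 2 * np.pi, len(f))
        eta = np.fft.irfft(np.sqrt(s) * np.exp(1j * phases), n)
        return eta / eta.std()                              # unit-variance elevation

    eta = random_wave_realization()
    # linear superposition gives near-Gaussian statistics (kurtosis ~ 3)
    print(f"skewness ~ {np.mean(eta**3):.3f}, kurtosis ~ {np.mean(eta**4):.3f}")
    ```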

  7. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
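
    A sketch of the two-step recipe the abstract describes: color white Gaussian noise to a target power spectrum, then map it pointwise through the Gaussian CDF and the inverse CDF of the desired marginal (an exponential here, as one example). The pointwise mapping slightly distorts the spectrum, which is why the authors call the method an engineering approach; the Gaussian-shaped PSD and all parameters are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    def correlated_exponential_field(n=256, corr_len=10.0):
        """Color white Gaussian noise to a target power spectrum, then map
        pointwise to the target marginal (exponential) via the Gaussian CDF."""
        white = rng.normal(size=(n, n))
        kx = np.fft.fftfreq(n)[:, None]
        ky = np.fft.fftfreq(n)[None, :]
        filt = np.exp(-(kx ** 2 + ky ** 2) * (np.pi * corr_len) ** 2)  # Gaussian PSD
        colored = np.fft.ifft2(np.fft.fft2(white) * filt).real
        colored /= colored.std()
        u = stats.norm.cdf(colored)            # Gaussian marginal -> uniform
        return stats.expon.ppf(u)              # uniform -> exponential marginal

    field = correlated_exponential_field()
    print(f"mean {field.mean():.2f} (exponential target: 1.00)")
    ```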

  8. Open quantum random walk in terms of quantum Bernoulli noise

    NASA Astrophysics Data System (ADS)

    Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling

    2018-03-01

    In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.

  9. The Effect of General Statistical Fiber Misalignment on Predicted Damage Initiation in Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.

    2014-01-01

    A micromechanical method is employed for the analysis of unidirectional composites in which the fiber orientation can possess various statistical misalignment distributions. The method relies on the probability-weighted averaging of the appropriate concentration tensor, which is established by the micromechanical procedure. This approach provides access to the local field quantities throughout the constituents, from which initiation of damage in the composite can be predicted. In contrast, a typical macromechanical procedure can determine the effective composite elastic properties in the presence of statistical fiber misalignment, but cannot provide the local fields. Fully random fiber distribution is presented as a special case using the proposed micromechanical method. Results are given that illustrate the effects of various amounts of fiber misalignment in terms of the standard deviations of in-plane and out-of-plane misalignment angles, where normal distributions have been employed. Damage initiation envelopes, local fields, effective moduli, and strengths are predicted for polymer and ceramic matrix composites with given normal distributions of misalignment angles, as well as fully random fiber orientation.

  10. Effects of random initial conditions on the dynamical scaling behaviors of a fixed-energy Manna sandpile model in one dimension

    NASA Astrophysics Data System (ADS)

    Kwon, Sungchul; Kim, Jin Min

    2015-01-01

    For a fixed-energy (FE) Manna sandpile model in one dimension, we investigate the effects of random initial conditions on the dynamical scaling behavior of an order parameter. In the FE Manna model, the density ρ of total particles is conserved, and an absorbing phase transition occurs at ρc as ρ varies. In this work, we show that, for a given ρ, random initial distributions of particles lead to the domain structure in which domains with particle densities higher and lower than ρc alternate with each other. In the domain structure, the dominant length scale is the average domain length, which increases via the coalescence of adjacent domains. At ρc, the domain structure slows down the decay of an order parameter and also causes anomalous finite-size effects, i.e., power-law decay followed by an exponential one before the quasisteady state. As a result, the interplay of particle conservation and random initial conditions causes the domain structure, which is the origin of the anomalous dynamical scaling behaviors for random initial conditions.
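
    A minimal sketch of a one-dimensional fixed-energy Manna model with a random initial particle distribution: particles are conserved, sites holding two or more particles are active, and each of their particles hops to a random nearest neighbour. The sweep-style update and the density (near reported 1D critical values) are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def manna_1d(length=1000, density=0.89, steps=2000):
        """Fixed-energy Manna model: particles conserved, sites with >= 2
        particles are active and send each particle to a random neighbour
        (periodic boundaries). Returns the activity density vs time."""
        # random initial condition: drop particles on uniformly random sites
        n = rng.multinomial(int(density * length), np.ones(length) / length)
        activity = []
        for _ in range(steps):
            active = np.where(n >= 2)[0]
            activity.append(len(active) / length)
            if len(active) == 0:
                break                          # absorbing state reached
            for site in active:                # sequential sweep update
                k = n[site]
                n[site] = 0
                jumps = rng.choice((-1, 1), size=k)
                np.add.at(n, (site + jumps) % length, 1)
        return np.array(activity)

    act = manna_1d()
    print(f"steps simulated: {len(act)}, final activity {act[-1]:.4f}")
    ```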

  11. Anomalous transport in disordered fracture networks: Spatial Markov model for dispersion with variable injection modes

    NASA Astrophysics Data System (ADS)

    Kang, Peter K.; Dentz, Marco; Le Borgne, Tanguy; Lee, Seunghak; Juanes, Ruben

    2017-08-01

    We investigate tracer transport on random discrete fracture networks that are characterized by the statistics of the fracture geometry and hydraulic conductivity. While it is well known that tracer transport through fractured media can be anomalous and particle injection modes can have major impact on dispersion, the incorporation of injection modes into effective transport modeling has remained an open issue. The fundamental reason behind this challenge is that, even if the Eulerian fluid velocity is steady, the Lagrangian velocity distribution experienced by tracer particles evolves with time from its initial distribution, which is dictated by the injection mode, to a stationary velocity distribution. We quantify this evolution by a Markov model for particle velocities that are equidistantly sampled along trajectories. This stochastic approach allows for the systematic incorporation of the initial velocity distribution and quantifies the interplay between velocity distribution and spatial and temporal correlation. The proposed spatial Markov model is characterized by the initial velocity distribution, which is determined by the particle injection mode, the stationary Lagrangian velocity distribution, which is derived from the Eulerian velocity distribution, and the spatial velocity correlation length, which is related to the characteristic fracture length. This effective model leads to a time-domain random walk for the evolution of particle positions and velocities, whose joint distribution follows a Boltzmann equation. Finally, we demonstrate that the proposed model can successfully predict anomalous transport through discrete fracture networks with different levels of heterogeneity and arbitrary tracer injection modes.

  12. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.

  13. Macronutrient distribution in 'Tifblue' rabbiteye blueberry

    USDA-ARS?s Scientific Manuscript database

    This study was developed and initiated to determine the nutrient distribution within a ‘Tifblue’ rabbiteye blueberry. Rooted cuttings were potted into 3.8 liter containers and placed into a completely randomized design on a covered bench. Plants were divided evenly into 3 groups for low, high a...

  14. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.
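
    The "remarkably simple Monte Carlo realization" can be appreciated from a toy branching-process sketch that tracks a single randomly initiated chain; the multiplicity table and fission/leak probabilities below are invented illustrative numbers (chosen subcritical so chains terminate), not values from the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # toy induced-fission multiplicity distribution (0..4 neutrons per fission)
    NU_PMF = np.array([0.03, 0.15, 0.32, 0.32, 0.18])

    def chain_leakage(p_fission=0.3, p_leak=0.5):
        """Total neutrons leaking from one fission chain started by a single
        neutron. Each neutron either induces a fission (spawning nu new
        neutrons), leaks, or is captured. k = p_fission * mean(nu) < 1 here,
        so every chain terminates."""
        alive, leaked = 1, 0
        while alive:
            alive -= 1
            u = rng.random()
            if u < p_fission:
                alive += rng.choice(len(NU_PMF), p=NU_PMF)
            elif u < p_fission + p_leak:
                leaked += 1
        return leaked

    counts = np.array([chain_leakage() for _ in range(20000)])
    # correlated moments of the counting distribution (cf. Feynman-Y statistics)
    print(f"mean {counts.mean():.3f}, variance {counts.var():.3f}, "
          f"Feynman Y = {counts.var()/counts.mean() - 1:.3f}")
    ```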

  15. Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality

    NASA Astrophysics Data System (ADS)

    Kearney, Michael J.; Martin, Richard J.

    2018-01-01

    A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
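
    A sketch of the variant described above: the walk starts from N fixed (+1) steps, after which the standard elephant-random-walk memory rule applies (recall a past step uniformly at random, repeat it with probability p, reverse it otherwise); the parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def elephant_walk(t_max=10000, n_fixed=10, p=0.75):
        """Elephant random walk whose initial state is n_fixed given (+1)
        steps; afterwards a past step is recalled uniformly at random and
        repeated with probability p, reversed otherwise."""
        steps = [1] * n_fixed
        for _ in range(t_max - n_fixed):
            recalled = steps[rng.integers(len(steps))]
            steps.append(recalled if rng.random() < p else -recalled)
        return np.cumsum(steps)

    # p > 3/4 is the classic superdiffusive (anomalous) regime of the ERW
    finals = [elephant_walk()[-1] for _ in range(200)]
    print(f"mean displacement {np.mean(finals):.1f}, std {np.std(finals):.1f}")
    ```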

  16. A distribution model for the aerial application of granular agricultural particles

    NASA Technical Reports Server (NTRS)

    Fernandes, S. T.; Ormsbee, A. I.

    1978-01-01

    A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape, and the model includes the effects of air resistance, distributor geometry and aircraft wake. General requirements for the maintenance of similarity of the distribution for scale-model tests are derived, and the problem of a nongeneral drag law is addressed. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effect of a random initial lateral position, particle size and drag coefficient. A listing of the computer code is included.

  17. Probabilistic analysis of structures involving random stress-strain behavior

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Thacker, B. H.; Harren, S. V.

    1991-01-01

    The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at the point of ultimate load, and (5) engineering strain at the point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.

  18. A novel strategy for load balancing of distributed medical applications.

    PubMed

    Logeswaran, Rajasvaran; Chen, Li-Choo

    2012-04-01

    Current trends in medicine, specifically in the electronic handling of medical applications ranging from digital imaging, paperless hospital administration and electronic medical records, and telemedicine to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for the distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.

  19. Simulation of Stochastic Processes by Coupled ODE-PDE

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A document discusses the emergence of randomness in solutions of coupled, fully deterministic ODE-PDE (ordinary differential equations-partial differential equations) due to failure of the Lipschitz condition, as a new phenomenon. It is possible to exploit the special properties of ordinary differential equations (represented by an arbitrarily chosen dynamical system) coupled with the corresponding Liouville equations (used to describe the evolution of initial uncertainties in terms of a joint probability distribution) in order to simulate stochastic processes with prescribed probability distributions. The important advantage of the proposed approach is that the simulation does not require a random-number generator.

  1. Does Mass Azithromycin Distribution Impact Child Growth and Nutrition in Niger? A Cluster-Randomized Trial

    PubMed Central

    Amza, Abdou; Yu, Sun N.; Kadri, Boubacar; Nassirou, Baido; Stoller, Nicole E.; Zhou, Zhaoxia; West, Sheila K.; Bailey, Robin L.; Gaynor, Bruce D.; Keenan, Jeremy D.; Porco, Travis C.; Lietman, Thomas M.

    2014-01-01

    Background: Antibiotic use in animals improves growth regardless of whether there is clinical evidence of infectious disease. Antibiotics used for trachoma control may have the unintended benefit of improving child growth. Methodology: In this sub-study of a larger randomized controlled trial, we assess anthropometry of pre-school children in a community-randomized trial of mass oral azithromycin distributions for trachoma in Niger. We measured height, weight, and mid-upper arm circumference (MUAC) in 12 communities randomized to receive annual mass azithromycin treatment of everyone versus 12 communities randomized to receive biannual mass azithromycin treatments for children, 3 years after the initial mass treatment. We collected measurements in 1,034 children aged 6–60 months. Principal Findings: We found no difference in the prevalence of wasting among children in the 12 annually treated communities that received three mass azithromycin distributions compared to the 12 biannually treated communities that received six mass azithromycin distributions (odds ratio = 0.88, 95% confidence interval = 0.53 to 1.49). Conclusions/Significance: We were unable to demonstrate a statistically significant difference in stunting, underweight, and low MUAC of pre-school children in communities randomized to annual mass azithromycin treatment or biannual mass azithromycin treatment. The role of antibiotics in child growth and nutrition remains unclear, but larger studies and longitudinal trials may help determine any association. PMID:25210836

  2. A quantitative approach to the topology of large-scale structure [for galactic clustering computation]

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.

    1987-01-01

    A quantitative measure of the topology of large-scale structure, the genus of density contours in a smoothed density distribution, is described and applied. For random phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model.

  3. A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-06-13

    The Lanchester combat model is a simple way to assess the effects of quantity and quality ... case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons ... since the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at ...

  4. Technical Report 1205: A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-07-08

    The Lanchester combat model is a simple way to assess the effects of quantity and quality ... model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned ... the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at either a ...

  5. Cryptographic Boolean Functions with Biased Inputs

    DTIC Science & Technology

    2015-07-31

    ...theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by ... distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic ... Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the ...

  6. Synchronization of an ensemble of oscillators regulated by their spatial movement.

    PubMed

    Sarkar, Sumantra; Parmananda, P

    2010-12-01

    Synchronization for a collection of oscillators residing in a finite two dimensional plane is explored. The coupling between any two oscillators in this array is unidirectional, viz., master-slave configuration. Initially the oscillators are distributed randomly in space and their autonomous time-periods follow a Gaussian distribution. The duty cycles of these oscillators, which work under an on-off scenario, are normally distributed as well. It is realized that random hopping of oscillators is a necessary condition for observing global synchronization in this ensemble of oscillators. Global synchronization in the context of the present work is defined as the state in which all the oscillators are rendered identical. Furthermore, there exists an optimal amplitude of random hopping for which the attainment of this global synchronization is the fastest. The present work is deemed to be of relevance to the synchronization phenomena exhibited by pulse coupled oscillators such as a collection of fireflies. © 2010 American Institute of Physics.

  7. Generation mechanism of nonlinear ultrasonic Lamb waves in thin plates with randomly distributed micro-cracks.

    PubMed

    Zhao, Youxuan; Li, Feilong; Cao, Peng; Liu, Yaolu; Zhang, Jianyu; Fu, Shaoyun; Zhang, Jun; Hu, Ning

    2017-08-01

    Since the identification of micro-cracks in engineering materials is very valuable in understanding the initial and slight changes in mechanical properties of materials under complex working environments, numerical simulations of the propagation of the low-frequency S0 Lamb wave in thin plates with randomly distributed micro-cracks were performed to study the behavior of nonlinear Lamb waves. The results showed that while the influence of the randomly distributed micro-cracks on the phase velocity of the low-frequency S0 fundamental waves could be neglected, significant ultrasonic nonlinear effects caused by the randomly distributed micro-cracks were discovered, mainly presenting as second harmonic generation. By using a Monte Carlo simulation method, we found that the acoustic nonlinear parameter increased linearly with the micro-crack density and the size of the micro-crack zone, and that it was also related to the excitation frequency and the friction coefficient of the micro-crack surfaces. In addition, it was found that the nonlinear effect of waves reflected by the micro-cracks was more noticeable than that of the transmitted waves. This study theoretically reveals that the low-frequency S0 mode of Lamb waves can be used as the fundamental wave to quantitatively identify micro-cracks in thin plates. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Thermodynamic method for generating random stress distributions on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.

  9. Establishing the kinetics of ballistic-to-diffusive transition using directional statistics

    NASA Astrophysics Data System (ADS)

    Liu, Pai; Heinson, William R.; Sumlin, Benjamin J.; Shen, Kuan-Yu; Chakrabarty, Rajan K.

    2018-04-01

    We establish the kinetics of the ballistic-to-diffusive (BD) transition observed in a two-dimensional random walk using directional statistics. Directional correlation is parameterized by the walker's turning angle distribution, which follows the commonly adopted wrapped Cauchy distribution (WCD). During the BD transition, the concentration factor (ρ) governing the WCD shape is observed to decrease from its initial value. We next analytically derive the relationship between the effective ρ and time, which essentially quantifies the BD transition rate. The prediction of our kinetic expression agrees well with the empirical datasets obtained from correlated random walk simulations. We further connect our formulation with the conventionally used scaling relationship between the walker's mean-square displacement and time.
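
    A minimal correlated-random-walk sketch of the ballistic-to-diffusive transition: unit steps with turning angles drawn from a wrapped Cauchy distribution of concentration rho, using scipy's wrapcauchy; the parameter values are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    def correlated_walk(n_steps=2000, n_walkers=500, rho=0.9):
        """2D correlated random walk: unit steps, turning angles drawn from
        a wrapped Cauchy distribution with concentration rho. Returns the
        ensemble mean-square displacement."""
        turns = stats.wrapcauchy.rvs(rho, size=(n_walkers, n_steps),
                                     random_state=rng)  # angles in [0, 2*pi)
        headings = np.cumsum(turns, axis=1)             # symmetric about 0 mod 2*pi
        x = np.cumsum(np.cos(headings), axis=1)
        y = np.cumsum(np.sin(headings), axis=1)
        return (x ** 2 + y ** 2).mean(axis=0)

    msd = correlated_walk()
    t = np.arange(1, len(msd) + 1)
    # log-log slope ~2 (ballistic) at early times, ~1 (diffusive) at late times
    print(f"early slope {np.log(msd[9]/msd[1])/np.log(t[9]/t[1]):.2f}, "
          f"late slope {np.log(msd[-1]/msd[200])/np.log(t[-1]/t[200]):.2f}")
    ```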

  10. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
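
    The detection sample sizes quoted above follow the standard binomial argument: under random dispersion, a sample of n independent individuals misses every resistant one with probability (1 - p)^n. The sketch below computes the smallest n achieving 95% detection probability; it reproduces the magnitude of the quoted figures (the paper's exact numbers come from simulation over a discrete set of candidate sample sizes).

    ```python
    import math

    def detection_sample_size(freq, confidence=0.95):
        """Smallest random-sample size containing at least one resistant
        individual with the given confidence, assuming random dispersion
        (so detections are independent Bernoulli trials):
        1 - (1 - freq)**n >= confidence."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - freq))

    for freq in (0.01, 0.10, 0.20):
        print(f"resistance frequency {freq:.0%}: n = {detection_sample_size(freq)}")
    ```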

  11. Effect of randomness in logistic maps

    NASA Astrophysics Data System (ADS)

    Khaleque, Abdul; Sen, Parongama

    2015-01-01

    We study a random logistic map x_{t+1} = a_t x_t [1 - x_t], where the a_t are bounded random variables (q1 ≤ a_t ≤ q2) independently drawn from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However, x_t averaged over different realizations reaches a fixed point. For 1 ≤ a_t ≤ 4, the system shows nonchaotic behavior and the Lyapunov exponent is strongly dependent on the asymmetry of the distribution from which a_t is drawn. Chaotic behavior is seen to occur beyond a threshold value of q1 (q2) when q2 (q1) is varied. The most striking result is that the random map is chaotic even when q2 is less than the threshold value 3.5699⋯ at which chaos occurs in the nonrandom map. We also employ a different method in which a different set of random variables is used for the evolution of two initially identical x values; here the chaotic regime exists for all q1 ≠ q2.
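
    A sketch of the random logistic map with a_t drawn uniformly from [q1, q2] (one possible choice of bounded distribution), estimating the Lyapunov exponent from the derivative |a_t (1 - 2 x_t)| along the orbit:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def lyapunov_random_logistic(q1, q2, t_max=100000, x0=0.3):
        """Lyapunov exponent of x_{t+1} = a_t x_t (1 - x_t) with a_t drawn
        uniformly from [q1, q2]: the time average of log|a_t (1 - 2 x_t)|."""
        x, acc = x0, 0.0
        for _ in range(t_max):
            a = rng.uniform(q1, q2)
            acc += np.log(abs(a * (1.0 - 2.0 * x)))
            x = a * x * (1.0 - x)
        return acc / t_max

    # the abstract's point: chaos (lambda > 0) can appear even with
    # q2 below the nonrandom chaos threshold 3.5699...
    print(f"lambda(q1=2.5, q2=3.2) = {lyapunov_random_logistic(2.5, 3.2):+.3f}")
    print(f"lambda(q1=3.8, q2=4.0) = {lyapunov_random_logistic(3.8, 4.0):+.3f}")
    ```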

  12. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  13. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  14. Numerical Modeling of S-Wave Generation by Fracture Damage in Underground Nuclear Explosions

    DTIC Science & Technology

    2009-09-30

    ...Element Package, ABAQUS. A user-defined subroutine, VUMAT, was written that incorporates the micro-mechanics based damage constitutive law described ... dynamic damage evolution on the elastic and anelastic response. 2) Whereas the Ashby/Sammis model was only applicable to the case where the initial cracks ... are all parallel and the same size, we can now include a specified distribution of initial crack sizes with random azimuthal orientation about the ...

  15. Significant locations in auxiliary data as seeds for typical use cases of point clustering

    NASA Astrophysics Data System (ADS)

    Kröger, Johannes

    2018-05-01

    Random greedy clustering and grid-based clustering are highly sensitive to their initial parameters. When used for point-data clustering in maps, they often change the apparent distribution of the underlying data. We propose a process that uses precomputed weighted seed points for the initialization of clusters, for example from local maxima in population density data. Exemplary results from the clustering of a dataset of petrol stations are presented.

  16. Transport of secondary electrons and reactive species in ion tracks

    NASA Astrophysics Data System (ADS)

    Surdutovich, Eugene; Solov'yov, Andrey V.

    2015-08-01

    The transport of reactive species brought about by ions traversing a tissue-like medium is analysed analytically. Secondary electrons ejected by ions are capable of ionizing other molecules; the transport of these generations of electrons is studied using the random walk approximation for as long as the electrons remain ballistic. Then, the distribution of solvated electrons produced as a result of the interaction of low-energy electrons with water molecules is obtained. The radial distribution of energy loss by ions and secondary electrons to the medium yields the initial radial dose distribution, which can be used as the initial condition for the predicted shock waves. The formation, diffusion, and chemical evolution of hydroxyl radicals in liquid water are studied as well.

  17. Time-Dependent Hartree-Fock Approach to Nuclear Pasta at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2013-03-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature.

  18. Time-dependent Hartree-Fock approach to nuclear "pasta" at finite temperature

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2013-05-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell density distribution as a measure to distinguish pasta matter from uniform matter.

  19. The correlation function for density perturbations in an expanding universe. III - The three-point function and predictions of the four-point and higher-order correlation functions

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1978-01-01

    Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.

  20. A probabilistic fatigue analysis of multiple site damage

    NASA Technical Reports Server (NTRS)

    Rohrbaugh, S. M.; Ruff, D.; Hillberry, B. M.; Mccabe, G.; Grandt, A. F., Jr.

    1994-01-01

    The variability in initial crack size and fatigue crack growth is incorporated in a probabilistic model that is used to predict the fatigue lives of unstiffened aluminum alloy panels containing multiple site damage (MSD). The uncertainty of the damage in the MSD panel is represented by a distribution of fatigue crack lengths that are analytically derived from equivalent initial flaw sizes. The variability in fatigue crack growth rate is characterized by stochastic descriptions of crack growth parameters for a modified Paris crack growth law. A Monte Carlo simulation explicitly describes the MSD panel by randomly selecting values of the stochastic variables and then growing the MSD cracks with a deterministic fatigue model until the panel fails. Different simulations investigate the influences of the fatigue variability on the distributions of remaining fatigue lives. Six cases that consider fixed and variable conditions of initial crack size and fatigue crack growth rate are examined. The crack size distribution exhibited a dominant effect on the remaining fatigue life distribution, and the variable crack growth rate exhibited a lesser effect on the distribution. In addition, the probabilistic model predicted that only a small percentage of the life remains after a lead crack develops in the MSD panel.
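
    The structure of such a model can be sketched with a single crack under the plain Paris law (the paper uses a modified Paris law and interacting MSD cracks, so this is only illustrative). Assuming a center crack in an infinite plate, ΔK = Δσ sqrt(π a), the Paris law integrates in closed form for m ≠ 2; the material values below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def paris_life(a0, a_crit, c, m, d_stress):
        """Closed-form cycle count for the Paris law da/dN = C (ΔK)^m with
        ΔK = Δσ sqrt(pi a) (center crack, infinite plate), valid for m != 2."""
        k = c * (d_stress * np.sqrt(np.pi)) ** m
        e = 1.0 - m / 2.0
        return (a_crit ** e - a0 ** e) / (k * e)

    # variability in initial flaw size (lognormal) and Paris exponent (normal)
    a0 = rng.lognormal(mean=np.log(1e-4), sigma=0.3, size=5000)   # metres
    m = rng.normal(3.0, 0.05, size=5000)
    lives = paris_life(a0, a_crit=0.025, c=5e-30, m=m, d_stress=100e6)
    print(f"median life {np.median(lives):.3g} cycles, "
          f"5th percentile {np.percentile(lives, 5):.3g}")
    ```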

  1. Reliability of stiffened structural panels: Two examples

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Davis, D. Dale, Jr.; Maring, Lise D.; Krishnamurthy, Thiagaraja; Elishakoff, Isaac

    1992-01-01

    The reliability of two graphite-epoxy stiffened panels that contain uncertainties is examined. For one panel, the effect of an overall bow-type initial imperfection is studied. The size of the bow is assumed to be a random variable. The failure mode is buckling. The benefits of quality control are explored by using truncated distributions. For the other panel, the effect of uncertainties in a strain-based failure criterion is studied. The allowable strains are assumed to be random variables. A geometrically nonlinear analysis is used to calculate a detailed strain distribution near an elliptical access hole in a wing panel that was tested to failure. Calculated strains are used to predict failure. Results are compared with the experimental failure load of the panel.

  2. Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph

    DTIC Science & Technology

    2014-07-01

    ...distribution of the random walk. This process can also be applied to other models, incomplete graphs, or to multiple dimensions. An advantage of this ... since any multiple of an eigenvector remains an eigenvector. Without any loss, let b_k = 1. Now we can ascertain the explicit solution for b_j when k < j ... this bound is valid for all initial probability distributions. However, without detailed information about the eigenvectors, we cannot extract more ...

  3. A Short Research Note on Calculating Exact Distribution Functions and Random Sampling for the 3D NFW Profile

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Howlett, Cullan

    2018-06-01

    In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject, or numeric interpolation (sometimes via a lookup table) for projecting random Uniform samples through the quantile distribution function to produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
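
    One way to write the inversion (derived here from the standard NFW enclosed-mass profile m(x) proportional to ln(1 + x) - x/(1 + x), so treat it as an assumption rather than a transcription of the paper) uses the principal branch of the Lambert W function:

    ```python
    import numpy as np
    from scipy.special import lambertw

    def nfw_mass_fraction(x):
        """Dimensionless enclosed mass of the NFW profile, x = r / r_s."""
        return np.log(1.0 + x) - x / (1.0 + x)

    def nfw_quantile(p, con):
        """Radius (in units of the scale radius r_s) enclosing mass fraction p
        of an NFW halo truncated at x = con. Substituting u = 1/(1+x) turns
        ln(1+x) - x/(1+x) = y into u - ln u = y + 1, solved by Lambert W."""
        y = p * nfw_mass_fraction(con)
        u = -np.real(lambertw(-np.exp(-1.0 - y)))   # principal branch, u in (0, 1]
        return 1.0 / u - 1.0

    rng = np.random.default_rng(10)
    r = nfw_quantile(rng.uniform(size=100000), con=10.0)  # inverse-CDF sampling
    med = np.median(r)
    print(f"median r/r_s = {med:.3f}")
    print(f"check: F(median)/F(c) = {nfw_mass_fraction(med)/nfw_mass_fraction(10.0):.3f}")
    ```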

  4. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.

  5. Biased random key genetic algorithm with insertion and gender selection for capacitated vehicle routing problem with time windows

    NASA Astrophysics Data System (ADS)

    Rochman, Auliya Noor; Prasetyo, Hari; Nugroho, Munajat Tri

    2017-06-01

    Vehicle Routing Problem (VRP) often occurs when manufacturers need to distribute their product to some customers/outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor. This type of VRP is also known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve the CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and defining chromosome gender for parents undergoing the crossover operation. The performance of the established algorithms was then compared to a heuristic procedure for solving a soft drink distribution. Some findings are revealed: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) results in a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) could further improve the performance of the heuristic method; however, BRKGA-GS tends to yield worse results than the standard BRKGA.

  6. Optical noise-free image encryption based on quick response code and high dimension chaotic system in gyrator transform domain

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Xu, Minjie; Tian, Ailing

    2017-04-01

    A novel optical image encryption scheme is proposed based on a quick response code and a high dimension chaotic system, where only the intensity distribution of the encoded information is recorded as ciphertext. Initially, the quick response code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted to ciphertext with a noise-like distribution by using two cascaded gyrator transforms. In the process of encryption, parameters such as rotation angles and random phase masks are generated as interim variables and functions based on the Chen system. A new phase retrieval algorithm is designed to reconstruct the initial quick response code in the process of decryption, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with mobile devices. The ciphertext image is a real-valued function, which is more convenient for storage and transmission. Meanwhile, the security of the proposed scheme is greatly enhanced due to the high sensitivity of the initial values of the Chen system. Extensive cryptanalysis and simulation have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.

  7. Pseudorandom number generation using chaotic true orbits of the Bernoulli map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro

    We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
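
    The essential trick is that the doubling map can be iterated without round-off when the state is kept in exact arithmetic. The sketch below, an illustrative special case rather than the authors' generator, iterates x -> 2x mod 1 exactly for a seed in Q(sqrt(2)) by representing the state as a + b*sqrt(2) with rational a, b and deciding each output bit with an exact sign test.

      from fractions import Fraction

      def sign_u_plus_v_sqrt2(u, v):
          """Exact sign of u + v*sqrt(2) for rational u, v."""
          if v == 0:
              return (u > 0) - (u < 0)
          if u == 0:
              return (v > 0) - (v < 0)
          if u > 0 and v > 0:
              return 1
          if u < 0 and v < 0:
              return -1
          # mixed signs: compare u^2 with 2*v^2
          d = u * u - 2 * v * v
          s = 1 if d > 0 else (-1 if d < 0 else 0)
          return s if u > 0 else -s

      def bernoulli_bits(a, b, n):
          """n bits of the true orbit of x -> 2x mod 1 from x0 = a + b*sqrt(2),
          with x0 in [0, 1); all arithmetic is exact."""
          bits = []
          for _ in range(n):
              a, b = 2 * a, 2 * b                     # x -> 2x, exactly
              # bit = 1 iff 2x >= 1, i.e. (a - 1) + b*sqrt(2) >= 0
              if sign_u_plus_v_sqrt2(a - 1, b) >= 0:
                  bits.append(1)
                  a -= 1                              # subtract the integer part
              else:
                  bits.append(0)
          return bits

      # seed x0 = sqrt(2) - 1 ~ 0.4142, a quadratic algebraic number
      print(bernoulli_bits(Fraction(-1), Fraction(1), 16))

    The paper's seed-selection scheme distributes many such seeds almost equidistantly in the unit interval; the sketch shows only the exact iteration for a single seed.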

  8. On Testability of Missing Data Mechanisms in Incomplete Data Sets

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2011-01-01

    This article is concerned with the question of whether the missing data mechanism routinely referred to as missing completely at random (MCAR) is statistically examinable via a test for lack of distributional differences between groups with observed and missing data, and related consequences. A discussion is initially provided, from a formal logic…

  9. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    NASA Astrophysics Data System (ADS)

    Li, C.

    2012-07-01

    POS, integrating GPS / INS (Inertial Navigation Systems), has allowed rapid and accurate determination of position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected; thus, the traditional calibration approach based on three orthogonal vanishing points is challenged. Firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is used to estimate the nonlinear error equations of the two vanishing points (VX, VY); a way to set initial weights for the adjustment solution of single-image vanishing points is presented, and the vanishing points and their error distributions are solved using an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion; the Monte Carlo methods used for this random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
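
    One standard way to realize the third step uses the fact that, for a distortion-free camera with square pixels, the principal point is the orthocenter of the triangle formed by three mutually orthogonal vanishing points; whether this is the exact triangle relationship used in the paper is an assumption. The Monte Carlo sketch below samples VX and VY from their error ellipses (all coordinates and covariances are hypothetical values) and accumulates the resulting scatter of VZ.

      import numpy as np

      rng = np.random.default_rng(1)

      def third_vp(vx, vy, pp):
          """Solve for VZ from the orthocenter property: with pp the
          orthocenter, (pp-VX).(VY-VZ)=0 and (pp-VY).(VX-VZ)=0 give a
          2x2 linear system in VZ."""
          A = np.array([pp - vx, pp - vy])
          b = np.array([(pp - vx) @ vy, (pp - vy) @ vx])
          return np.linalg.solve(A, b)

      # hypothetical vanishing points (pixels), principal point and 2x2
      # covariances standing in for the estimated error ellipses
      vx0, vy0 = np.array([2500.0, 900.0]), np.array([-1800.0, 950.0])
      pp = np.array([640.0, 480.0])
      cov_x = np.array([[900.0, 120.0], [120.0, 400.0]])
      cov_y = np.array([[700.0, -80.0], [-80.0, 300.0]])

      samples = np.array([
          third_vp(rng.multivariate_normal(vx0, cov_x),
                   rng.multivariate_normal(vy0, cov_y), pp)
          for _ in range(20000)])
      print("mean VZ:", samples.mean(axis=0))
      print("covariance of VZ:\n", np.cov(samples, rowvar=False))

    The covariance of the VZ samples yields its error ellipse, mirroring the paper's random statistical simulation with camera distortion ignored.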

  10. Direct Observations of Nucleation in a Nondilute Multicomponent Alloy

    NASA Technical Reports Server (NTRS)

    Sudbrack, Chantal K.; Noebe, Ronald D.; Seidman, David N.

    2006-01-01

    The chemical pathways leading to gamma'(L1(sub 2)) nucleation from nondilute Ni-5.2 Al-14.2 Cr at. %, gamma(fcc), at 873 K are followed with radial distribution functions and isoconcentration surface analyses of direct-space atom-probe tomographic images. Although Cr atoms initially are randomly distributed, a distribution of congruent Ni3Al short-range-order (SRO) domains, [R] approx. equals 0.6 nm, results from Al diffusion during quenching. Domain site occupancy develops as their number density increases, leading to Al-rich phase separation by gamma'-nucleation, [R]=0.75 nm, after SRO occurs.

  11. Signs of universality in the structure of culture

    NASA Astrophysics Data System (ADS)

    Băbeanu, Alexandru-Ionuţ; Talman, Leandros; Garlaschelli, Diego

    2017-11-01

    Understanding the dynamics of opinions, preferences and of culture as a whole requires more use of empirical data than has been done so far. It is clear that an important role in driving this dynamics is played by social influence, which is the essential ingredient of many quantitative models. Such models require that all traits are fixed when specifying the "initial cultural state". Typically, this initial state is randomly generated, from a uniform distribution over the set of possible combinations of traits. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state. If the latter is sampled from empirical data instead of being generated in a uniformly random way, a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behavior in the short-term. Moreover, if the initial state is randomized by shuffling the empirical traits among people, the level of long-term cultural diversity is in-between those obtained for the empirical and uniformly random counterparts. The current study repeats the analysis for multiple empirical data sets, showing that the results are remarkably similar, although the matrix of correlations between cultural variables clearly differs across data sets. This points towards robust structural properties inherent in empirical cultural states, possibly due to universal laws governing the dynamics of culture in the real world. The results also suggest that this dynamics might be characterized by criticality and involve mechanisms beyond social influence.

  12. Texture and anisotropy in ferroelectric lead metaniobate

    NASA Astrophysics Data System (ADS)

    Iverson, Benjamin John

    Ferroelectric lead metaniobate, PbNb2O6, is a piezoelectric ceramic typically used because of its elevated Curie temperature and anisotropic properties. However, the piezoelectric constant, d33, is relatively low in randomly oriented ceramics when compared to other ferroelectrics. Crystallographic texturing is often employed to increase the piezoelectric constant because the spontaneous polarization axes of grains are better aligned. In this research, crystallographic textures induced through tape casting are distinguished from textures induced through electrical poling. Texture is described using multiple quantitative approaches utilizing X-ray and neutron time-of-flight diffraction. Tape casting lead metaniobate with an inclusion of acicular template particles induces an orthotropic texture distribution. Templated grain growth from seed particles oriented during casting results in anisotropic grain structures. The degree of preferred orientation is directly linked to the shear behavior of the tape cast slurry. Increases in template concentration, slurry viscosity, and casting velocity lead to larger textures by inducing more particle orientation in the tape casting plane. The maximum 010 texture distributions were two and a half multiples of a random distribution. Ferroelectric texture was induced by electrical poling. Electric poling increases the volume of material oriented with the spontaneous polarization direction in the material. Samples with an initial paraelectric texture exhibit a greater change in the domain volume fraction during electrical poling than randomly oriented ceramics. In tape cast samples, the resulting piezoelectric response is proportional to the 010 texture present prior to poling. This results in property anisotropy dependent on initial texture. Piezoelectric properties measured on the most textured ceramics were similar to those obtained with a commercial standard.

  13. Inferring animal densities from tracking data using Markov chains.

    PubMed

    Whitehead, Hal; Jonsen, Ian D

    2013-01-01

    The distributions and relative densities of species are key to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are effectively used for many purposes, but generally provide biased measures of distribution, because the starts of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population in respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogeneous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.
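
    A minimal version of the proposed estimator is straightforward: count transitions between habitat cells along each track, row-normalize into a transition matrix, and take the stationary distribution of the chain as the relative density. The sketch below assumes the fitted chain is ergodic; the cell labels and toy tracks (all starting in cell 0, mimicking non-random tag deployment) are illustrative.

      import numpy as np

      def stationary_from_tracks(tracks, n_cells):
          """Relative density as the stationary distribution of a
          time-homogeneous Markov chain fitted to tracked cell sequences."""
          counts = np.zeros((n_cells, n_cells))
          for track in tracks:                      # each track: list of cell ids
              for a, b in zip(track[:-1], track[1:]):
                  counts[a, b] += 1
          # row-normalise into transition probabilities (uniform rows if unvisited)
          counts[counts.sum(axis=1) == 0] = 1.0
          P = counts / counts.sum(axis=1, keepdims=True)
          pi = np.full(n_cells, 1.0 / n_cells)
          for _ in range(10000):                    # power iteration: pi <- pi P
              pi = pi @ P
          return pi / pi.sum()

      # toy data: three tracks over 4 habitat cells, all starting in cell 0
      tracks = [[0, 1, 2, 1, 3], [0, 0, 1, 2, 2], [0, 3, 3, 2, 1]]
      print(stationary_from_tracks(tracks, 4))

    The stationary distribution is insensitive to where the tracks start, which is exactly the debiasing property the abstract describes.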

  14. Quantum cryptography for secure free-space communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, R.J.; Buttler, W.T.; Kwiat, P.G.

    1999-03-01

    The secure distribution of the secret random bit sequences known as key material is an essential precursor to their use for the encryption and decryption of confidential communications. Quantum cryptography is a new technique for secure key distribution with single-photon transmissions: Heisenberg's uncertainty principle ensures that an adversary can neither successfully tap the key transmissions, nor evade detection (eavesdropping raises the key error rate above a threshold value). The authors have developed experimental quantum cryptography systems based on the transmission of non-orthogonal photon polarization states to generate shared key material over line-of-sight optical links. Key material is built up using the transmission of a single photon per bit of an initial secret random sequence. A quantum-mechanically random subset of this sequence is identified, becoming the key material after a data reconciliation stage with the sender. The authors have developed and tested a free-space quantum key distribution (QKD) system over an outdoor optical path of approximately 1 km at Los Alamos National Laboratory under nighttime conditions. Results show that free-space QKD can provide secure real-time key distribution between parties who have a need to communicate secretly. Finally, they examine the feasibility of surface-to-satellite QKD.

  15. Secure communications using quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, R.J.; Buttler, W.T.; Kwiat, P.G.

    1997-08-01

    The secure distribution of the secret random bit sequences known as "key" material is an essential precursor to their use for the encryption and decryption of confidential communications. Quantum cryptography is an emerging technology for secure key distribution with single-photon transmissions: Heisenberg's uncertainty principle ensures that an adversary can neither successfully tap the key transmissions, nor evade detection (eavesdropping raises the key error rate above a threshold value). We have developed experimental quantum cryptography systems based on the transmission of non-orthogonal single-photon states to generate shared key material over multi-kilometer optical fiber paths and over line-of-sight links. In both cases, key material is built up using the transmission of a single photon per bit of an initial secret random sequence. A quantum-mechanically random subset of this sequence is identified, becoming the key material after a data reconciliation stage with the sender. In our optical fiber experiment we have performed quantum key distribution over 24 km of underground optical fiber using single-photon interference states, demonstrating that secure, real-time key generation over "open" multi-km node-to-node optical fiber communications links is possible. We have also constructed a quantum key distribution system for free-space, line-of-sight transmission using single-photon polarization states, which is currently undergoing laboratory testing.

  16. Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping

    2018-07-01

    Failure caused by seepage is the most common failure mode in dike engineering. Seepage in a dike, a typical longitudinally extended structure, is random, strongly concealed and initially of small magnitude. Using a distributed fiber temperature sensor system (DTS) with an improved optical fiber layout scheme, the location of the initial interpolation point of the saturation line is obtained. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltrated surface of the full dike cross-section is generated. Combined with the linear optical fiber seepage monitoring method, BLICM is applied to an engineering case, demonstrating a real-time, full-section seepage monitoring technique for dikes.
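
    The interpolation ingredient of BLICM can be illustrated on its own. The sketch below passes a barycentric Lagrange interpolant through a handful of saturation-line points of the kind the DTS layout would supply; the chainages and elevations are hypothetical, and the collocation treatment of the governing seepage equation is not shown.

      import numpy as np
      from scipy.interpolate import BarycentricInterpolator

      # hypothetical saturation-line elevations (m) recovered from the DTS
      # fibres at a few chainages (m) across the dike cross-section
      x_fibre = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
      z_saturation = np.array([8.2, 7.1, 5.9, 4.4, 2.8])

      # barycentric Lagrange interpolation through the initial points
      phreatic = BarycentricInterpolator(x_fibre, z_saturation)

      x_dense = np.linspace(0.0, 20.0, 81)
      print(phreatic(x_dense)[:5])   # interpolated saturation-line elevations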

  17. A Randomized Trial Assessing the Impact of a Personal Printed Feedback Portrait on Statin Prescribing in Primary Care

    ERIC Educational Resources Information Center

    Dormuth, Colin R.; Carney, Greg; Taylor, Suzanne; Bassett, Ken; Maclure, Malcolm

    2012-01-01

    Introduction: Knowledge translation (KT) initiatives have the potential to improve prescribing quality and produce savings that exceed the cost of the KT program itself, including the cost of evaluation using pragmatic study methods. Our objective was to measure the impact and estimated savings resulting from the distribution of individualized…

  18. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  19. Multi-Scale Fracture Mechanics of 3-D Reinforced Composites

    DTIC Science & Technology

    2010-02-26

    (Fragment from the report body:) ... the cohesive energy over the interface between plies n and n+1, separated by the horizontal surface z = z_n, is given by a surface integral (Eq. 16) ... where INP is the total number of integration points and V_n is the volume of the n-th ply. Note that the random distribution of initial strength ...

  20. Flake Orientation Effects On Physical and Mechanical Properties of Sweetgum Flakeboard

    Treesearch

    T.F. Shupe; Chung-Yun Hse; E.W. Price

    2001-01-01

    Research was initiated to determine the effect of flake orientation on the physical and mechanical properties of flakeboard. The panel fabrication techniques investigated were single-layer panels with random and oriented flake distribution, three-layer, five-layer, and seven-layer panels. Single-layer oriented panels had panel directional property ratios of 11.8 and 12....

  1. Emergence of small-world structure in networks of spiking neurons through STDP plasticity.

    PubMed

    Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas

    2011-01-01

    In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
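
    The topological bookkeeping described above can be reproduced with standard graph tools. The sketch below, a simplified stand-in for the authors' analysis, binarizes a weight matrix at a threshold, treats the result as an undirected graph, and forms a small-world index by normalizing clustering and path length against an Erdos-Renyi graph of matching size and density (both graphs are assumed connected).

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(2)

      def small_world_index(weights, threshold):
          """Binarise the weight matrix and compare clustering and path
          length with an Erdos-Renyi graph of equal size and density."""
          adj = (weights > threshold).astype(int)
          adj = np.maximum(adj, adj.T)              # undirected view
          np.fill_diagonal(adj, 0)
          G = nx.from_numpy_array(adj)
          C = nx.average_clustering(G)
          L = nx.average_shortest_path_length(G)    # assumes G is connected
          R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=3)
          Cr = nx.average_clustering(R)
          Lr = nx.average_shortest_path_length(R)   # assumes R is connected
          return (C / Cr) / (L / Lr)                # index > 1: small-world

      weights = rng.random((100, 100))              # stand-in for learned weights
      print(small_world_index(weights, threshold=0.9))

    In the paper this computation is repeated at every simulation step on the thresholded excitatory weights, tracing how the index grows under STDP.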

  2. Stochastic Fermi Energization of Coronal Plasma during Explosive Magnetic Energy Release

    NASA Astrophysics Data System (ADS)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz; Tsiolis, Vassilis; Anastasiadis, Anastasios

    2017-02-01

    The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points are acting as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker-Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker & Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λsc) of the particles between the scatterers inside the energization volume.

  4. Human Capital Planning in Higher Education Institutions: A Strategic Human Resource Development Initiative in Jordan

    ERIC Educational Resources Information Center

    Khasawneh, Samer

    2011-01-01

    Purpose: The primary purpose of this study is to determine the status of human capital planning in higher education institutions in Jordan. Design/methodology/approach: A random sample of 120 faculty members (in administrative positions) responded to a human capital planning (HCP) survey. The survey consisted of a pool of 38 items distributed over…

  5. Rogue waves and large deviations in deep sea.

    PubMed

    Dematteis, Giovanni; Grafke, Tobias; Vanden-Eijnden, Eric

    2018-01-30

    The appearance of rogue waves in deep sea is investigated by using the modified nonlinear Schrödinger (MNLS) equation in one spatial dimension with random initial conditions that are assumed to be normally distributed, with a spectrum approximating realistic conditions of a unidirectional sea state. It is shown that one can use the incomplete information contained in this spectrum as prior and supplement this information with the MNLS dynamics to reliably estimate the probability distribution of the sea surface elevation far in the tail at later times. Our results indicate that rogue waves occur when the system hits unlikely pockets of wave configurations that trigger large disturbances of the surface height. The rogue wave precursors in these pockets are wave patterns of regular height, but with a very specific shape that is identified explicitly, thereby allowing for early detection. The method proposed here combines Monte Carlo sampling with tools from large deviations theory that reduce the calculation of the most likely rogue wave precursors to an optimization problem that can be solved efficiently. This approach is transferable to other problems in which the system's governing equations contain random initial conditions and/or parameters.

  6. Role of streams in myxobacteria aggregate formation

    NASA Astrophysics Data System (ADS)

    Kiskowski, Maria A.; Jiang, Yi; Alber, Mark S.

    2004-10-01

    Cell contact, movement and directionality are important factors in biological development (morphogenesis), and myxobacteria are a model system for studying cell-cell interaction and cell organization preceding differentiation. When starved, thousands of myxobacteria cells align, stream and form aggregates which later develop into round, non-motile spores. Canonically, cell aggregation has been attributed to attractive chemotaxis, a long range interaction, but there is growing evidence that myxobacteria organization depends on contact-mediated cell-cell communication. We present a discrete stochastic model based on contact-mediated signaling that suggests an explanation for the initialization of early aggregates, aggregation dynamics and final aggregate distribution. Our model qualitatively reproduces the unique structures of myxobacteria aggregates and detailed stages which occur during myxobacteria aggregation: first, aggregates initialize in random positions and cells join aggregates by random walk; second, cells redistribute by moving within transient streams connecting aggregates. Streams play a critical role in final aggregate size distribution by redistributing cells among fewer, larger aggregates. The mechanism by which streams redistribute cells depends on aggregate sizes and is enhanced by noise. Our model predicts that with increased internal noise, more streams would form and streams would last longer. Simulation results suggest a series of new experiments.

  7. Onset of natural convection in a continuously perturbed system

    NASA Astrophysics Data System (ADS)

    Ghorbani, Zohreh; Riaz, Amir

    2017-11-01

    The convective mixing triggered by gravitational instability plays an important role in CO2 sequestration in saline aquifers. Linear stability analysis and numerical simulation of convective mixing in porous media require perturbations of small amplitude to be imposed on the concentration field in the form of an initial shape function. In aquifers, however, the instability is triggered by local variations in porosity and permeability. In this work, we consider a canonical 2D homogeneous system where perturbations arise due to spatial variation of porosity in the system. The advantage of this approach is not only that it eliminates the required initial shape function, but also that it is more realistic. Using a reduced nonlinear method, we first explore the effect of harmonic variations of porosity in the transverse and streamwise directions on the onset time of convection and the late-time behavior. We then obtain the optimal porosity structure that minimizes the convection onset. We further examine the effect of a random porosity distribution, independent of the spatial mode of the porosity structure, on the convection onset. Using high-order pseudospectral DNS, we explore how the random distribution differs from the modal approach in predicting the onset time.

  8. Randomized controlled trial of mailed Nicotine Replacement Therapy to Canadian smokers: study protocol.

    PubMed

    Cunningham, John A; Leatherdale, Scott T; Selby, Peter L; Tyndale, Rachel F; Zawertailo, Laurie; Kushnir, Vladyslav

    2011-09-28

    Considerable public health efforts are ongoing Canada-wide to reduce the prevalence of smoking in the general population. From 1985 to 2005, smoking rates among adults decreased from 35% to 19%, however, since that time, the prevalence has plateaued at around 18-19%. To continue to reduce the number of smokers at the population level, one option has been to translate interventions that have demonstrated clinical efficacy into population level initiatives. Nicotine Replacement Therapy (NRT) has a considerable clinical research base demonstrating its efficacy and safety and thus public health initiatives in Canada and other countries are distributing NRT widely through the mail. However, one important question remains unanswered--do smoking cessation programs that involve mailed distribution of free NRT work? To answer this question, a randomized controlled trial is required. A single blinded, panel survey design with random assignment to an experimental and a control condition will be used in this study. A two-stage recruitment process will be employed, in the context of a general population survey with two follow-ups (8 weeks and 6 months). Random digit dialing of Canadian home telephone numbers will identify households with adult smokers (aged 18+ years) who are willing to take part in a smoking study that involves three interviews, with saliva collection for 3-HC/cotinine ratio measurement at baseline and saliva cotinine verification at 8-week and 6-month follow-ups (N = 3,000). Eligible subjects interested in free NRT will be determined at baseline (N = 1,000) and subsequently randomized into experimental and control conditions to receive versus not receive nicotine patches. The primary hypothesis is that subjects who receive nicotine patches will display significantly higher quit rates (as assessed by 30 day point prevalence of abstinence from tobacco) at 6-month follow-up as compared to subjects who do not receive nicotine patches at baseline. The findings from the proposed trial are timely and highly relevant as mailed distribution of NRT require considerable resources and there are limited public health dollars available to combat this substantial health concern. In addition, findings from this randomized controlled trial will inform the development of models to engage smokers to quit, incorporating proactive recruitment and the offer of evidence based treatment. ClinicalTrials.gov: NCT01429129.

  9. Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1991-01-01

    The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three-body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.
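
    The size of such an enhancement is easy to estimate in the two-body approximation, where the accretion rate scales as v(1 + v_esc^2/v^2) due to gravitational focusing. The toy Monte Carlo below, not the paper's calculation, averages this rate over a triaxial Gaussian and compares it with the single-velocity rate evaluated at the RMS speed; the dispersions and escape speed are assumed values.

      import numpy as np

      rng = np.random.default_rng(11)

      sig = np.array([1.0, 0.7, 0.4])    # assumed triaxial velocity dispersions
      v_esc = 5.0                        # assumed escape speed of the protoplanet

      v = rng.standard_normal((200000, 3)) * sig
      speed = np.linalg.norm(v, axis=1)

      # two-body rate ~ <v (1 + v_esc^2 / v^2)>, per unit (pi R^2 n)
      rate_gauss = np.mean(speed + v_esc ** 2 / speed)
      v_rms = np.sqrt(np.sum(sig ** 2))
      rate_single = v_rms + v_esc ** 2 / v_rms
      print("enhancement factor:", rate_gauss / rate_single)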

  10. Reheating-volume measure for random-walk inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winitzki, Sergei; Yukawa Institute of Theoretical Physics, Kyoto University, Kyoto

    2008-09-15

    The recently proposed 'reheating-volume' (RV) measure promises to solve the long-standing problem of extracting probabilistic predictions from cosmological multiverse scenarios involving eternal inflation. I give a detailed description of the new measure and its applications to generic models of eternal inflation of random-walk type. For those models I derive a general formula for RV-regulated probability distributions that is suitable for numerical computations. I show that the results of the RV cutoff in random-walk type models are always gauge invariant and independent of the initial conditions at the beginning of inflation. In a toy model where equal-time cutoffs lead to the 'youngness paradox', the RV cutoff yields unbiased results that are distinct from previously proposed measures.

  11. Clearing out a maze: A model of chemotactic motion in porous media

    NASA Astrophysics Data System (ADS)

    Schilling, Tanja; Voigtmann, Thomas

    2017-12-01

    We study the anomalous dynamics of a biased "hungry" (or "greedy") random walk on a percolating cluster. The model mimics chemotaxis in a porous medium: In close resemblance to the 1980s arcade game PAC-MAN®, the hungry random walker consumes food, which is initially distributed in the maze, and biases its movement towards food-filled sites. We observe that the mean-squared displacement of the process follows a power law with an exponent that is different from previously known exponents describing passive or active microswimmer dynamics. The change in dynamics is well described by a dynamical exponent that depends continuously on the propensity to move towards food. It results in slower differential growth when compared to the unbiased random walk.

  12. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule to go to the nearest point which has not been visited in the preceding μ steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^{(μ,d)}(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S_∞^{(1,d)}(t,p) = [Γ(1 + I_d^{-1})(t + I_d^{-1})/Γ(t + p + I_d^{-1})] δ_{p,2}, where t = 0, 1, 2, ..., ∞, Γ(z) is the gamma function and δ_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d → ∞, and the random map model which, even for μ = 0, presents a nontrivial cycle distribution [S_N^{(0,rm)}(p) ∝ p^{-1}]: S_N^{(0,rm)}(t,p) = Γ(N)/{Γ[N+1-(t+p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d.
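
    The δ_{p,2} factor in the formula above is easy to check empirically: for the memoryless walk every attractor is a pair of mutual nearest neighbours. The sketch below simulates the walk under the convention that only the current site is forbidden (an assumption about the μ = 1 convention) and returns the transient and period for one trajectory.

      import numpy as np

      rng = np.random.default_rng(4)

      def tourist_walk(points, start):
          """Memoryless tourist walk: from each point, hop to its nearest
          neighbour; returns (transient t, period p).  Attractors are mutual
          nearest-neighbour pairs, so p = 2, matching delta_{p,2}."""
          d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
          np.fill_diagonal(d, np.inf)
          nearest = d.argmin(axis=1)
          path, pos = [], start
          while pos not in path:
              path.append(pos)
              pos = nearest[pos]
          t = path.index(pos)
          return t, len(path) - t

      pts = rng.random((500, 2))             # N points on the unit square, d = 2
      t, p = tourist_walk(pts, start=0)
      print(t, p)                            # p is always 2 for this walk

    Collecting (t, p) over many starting points gives a histogram that can be compared against the analytical S_∞^{(1,d)}(t,p) above.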

  13. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is assumed as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m2), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme (Gumbel) distribution with rare occurrences of extremely large genomes (positive-skewed), as it is similar for the log-normal distribution of the whole Angiosperms. Even plants from the same plot frequently varied considerably in genome size and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968

  14. Effects of perfluorohexane vapor on relative blood flow distribution in an animal model of surfactant-depleted lung injury

    NASA Technical Reports Server (NTRS)

    Hubler, Matthias; Souders, Jennifer E.; Shade, Erin D.; Polissar, Nayak L.; Bleyl, Jorg U.; Hlastala, Michael P.

    2002-01-01

    OBJECTIVE: To test the hypothesis that treatment with vaporized perfluorocarbon affects the relative pulmonary blood flow distribution in an animal model of surfactant-depleted acute lung injury. DESIGN: Prospective, randomized, controlled trial. SETTING: A university research laboratory. SUBJECTS: Fourteen New Zealand White rabbits (weighing 3.0-4.5 kg). INTERVENTIONS: The animals were ventilated with an FIO(2) of 1.0 before induction of acute lung injury. Acute lung injury was induced by repeated saline lung lavages. Eight rabbits were randomized to 60 mins of treatment with an inspiratory perfluorohexane vapor concentration of 0.2 in oxygen. To compensate for the reduced FIO(2) during perfluorohexane treatment, FIO(2) was reduced to 0.8 in control animals. Change in relative pulmonary blood flow distribution was assessed by using fluorescent-labeled microspheres. MEASUREMENTS AND MAIN RESULTS: Microsphere data showed a redistribution of relative pulmonary blood flow attributable to depletion of surfactant. Relative pulmonary blood flow shifted from areas that were initially high-flow to areas that were initially low-flow. During the study period, relative pulmonary blood flow of high-flow areas decreased further in the control group, whereas it increased in the treatment group. This difference was statistically significant between the groups (p =.02) as well as in the treatment group compared with the initial injury (p =.03). Shunt increased in both groups over time (control group, 30% +/- 10% to 63% +/- 20%; treatment group, 37% +/- 20% to 49% +/- 23%), but the changes compared with injury were significantly less in the treatment group (p =.03). CONCLUSION: Short treatment with perfluorohexane vapor partially reversed the shift of relative pulmonary blood flow from high-flow to low-flow areas attributable to surfactant depletion.

  15. Manual vs. integrated automatic load-distributing band CPR with equal survival after out of hospital cardiac arrest. The randomized CIRC trial.

    PubMed

    Wik, Lars; Olsen, Jan-Aage; Persse, David; Sterz, Fritz; Lozano, Michael; Brouwer, Marc A; Westfall, Mark; Souders, Chris M; Malzer, Reinhard; van Grunsven, Pierre M; Travis, David T; Whitehead, Anne; Herken, Ulrich R; Lerner, E Brooke

    2014-06-01

    To compare integrated automated load distributing band CPR (iA-CPR) with high-quality manual CPR (M-CPR) to determine equivalence, superiority, or inferiority in survival to hospital discharge. Between March 5, 2009 and January 11, 2011 a randomized, unblinded, controlled group sequential trial of adult out-of-hospital cardiac arrests of presumed cardiac origin was conducted at three US and two European sites. After EMS providers initiated manual compressions patients were randomized to receive either iA-CPR or M-CPR. Patient follow-up was until all patients were discharged alive or died. The primary outcome, survival to hospital discharge, was analyzed adjusting for covariates, (age, witnessed arrest, initial cardiac rhythm, enrollment site) and interim analyses. CPR quality and protocol adherence were monitored (CPR fraction) electronically throughout the trial. Of 4753 randomized patients, 522 (11.0%) met post enrollment exclusion criteria. Therefore, 2099 (49.6%) received iA-CPR and 2132 (50.4%) M-CPR. Sustained ROSC (emergency department admittance), 24h survival and hospital discharge (unknown for 12 cases) for iA-CPR compared to M-CPR were 600 (28.6%) vs. 689 (32.3%), 456 (21.8%) vs. 532 (25.0%), 196 (9.4%) vs. 233 (11.0%) patients, respectively. The adjusted odds ratio of survival to hospital discharge for iA-CPR compared to M-CPR, was 1.06 (95% CI 0.83-1.37), meeting the criteria for equivalence. The 20 min CPR fraction was 80.4% for iA-CPR and 80.2% for M-CPR. Compared to high-quality M-CPR, iA-CPR resulted in statistically equivalent survival to hospital discharge. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Computational micromechanics of dynamic compressive loading of a brittle polycrystalline material using a distribution of grain boundary properties

    NASA Astrophysics Data System (ADS)

    Kraft, R. H.; Molinari, J. F.; Ramesh, K. T.; Warner, D. H.

    A two-dimensional finite element model is used to investigate compressive loading of a brittle ceramic. Intergranular cracking in the microstructure is captured explicitly by using a distribution of cohesive interfaces. The addition of confining stress increases the maximum strength and if high enough, can allow the effective material response to reach large strains before failure. Increasing the friction at the grain boundaries also increases the maximum strength until saturation of the strength is approached. Above a transitional strain rate, increasing the rate-of-deformation also increases the strength and as the strain rate increases, fragment sizes of the damaged specimen decrease. The effects of flaws within the specimen were investigated using a random distribution at various initial flaw densities. The model is able to capture an effective modulus change and degradation of strength as the initial flaw density increases. Effects of confinement, friction, and spatial distribution of flaws seem to depend on the crack coalescence and dilatation of the specimen, while strain-rate effects are result of inertial resistance to motion.

  17. Evaluation of two school-based HIV prevention interventions in the border city of Tijuana, Mexico.

    PubMed

    Martinez-Donate, Ana P; Hovell, Melbourne F; Zellner, Jennifer; Sipan, Carol L; Blumberg, Elaine J; Carrizosa, Claudia

    2004-08-01

    This research project examined the individual and combined effectiveness of an HIV prevention workshop and a free condom distribution program in four high schools in Tijuana, Mexico. Adolescents (N = 320) completed baseline measures on sexual practices and theoretical correlates and participated in a two-part study. In Study 1, students were randomly assigned to an HIV prevention workshop or a control condition, with a 3-month follow-up assessment. Results indicate three significant workshop benefits regarding HIV transmission by altering sexual initiation, access to condoms, and traditional beliefs regarding condoms. In Study 2, we set up a condom distribution program at two of the participating schools, and students completed a 6-month follow-up assessment. Results indicate that exposure to the workshop followed by access to the condom distribution program yielded two beneficial results for reducing HIV transmission: moderating sexual initiation and increasing condom acquisition. Access to the condom distribution program alone had no effects on behavioral and psychosocial correlates of HIV transmission. We discuss implications of these results.

  18. Super-stable Poissonian structures

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2012-10-01

    In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.

  19. Effect of microstructure on the detonation initiation in energetic materials

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jackson, T. L.

    2017-12-01

    In this work we examine the role of the microstructure on detonation initiation of energetic materials. We solve the reactive Euler equations, with the energy equation augmented by a power deposition term. The deposition term is based on simulations of void collapse at the microscale, modeled at the mesoscale as hot-spots, while the reaction rate at the mesoscale is modeled using density-based kinetics. We carry out two-dimensional simulations of random packs of HMX crystals in a binder. We show that mean particle size, size distribution, and particle shape have a major effect on the transition between detonation and no-detonation, thus highlighting the importance of the microstructure for shock-induced initiation.

  20. Compiling probabilistic, bio-inspired circuits on a field programmable analog array

    PubMed Central

    Marr, Bo; Hasler, Jennifer

    2014-01-01

    A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed-mode processor for which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it is shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than in a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, demonstrating over a 127X performance improvement over current software approaches. The relevance of this approach is extended to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
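
    A software analogue of the claimed chain Bernoulli -> uniform -> exponential is shown below; the hardware performs this with comparators and analog dynamics, so the sketch only illustrates the mathematics. Uniforms are assembled from fair Bernoulli bits and pushed through the inverse exponential CDF; the bit depth k = 32 is an assumed parameter.

      import numpy as np

      rng = np.random.default_rng(5)

      def bernoulli_bits(shape):
          """Stand-in for the hardware Bernoulli source (fair comparator bits)."""
          return rng.integers(0, 2, size=shape)

      def uniform_from_bits(n, k=32):
          """Assemble uniforms on [0, 1) from k Bernoulli(1/2) bits each."""
          bits = bernoulli_bits((n, k))
          return (bits * 0.5 ** np.arange(1, k + 1)).sum(axis=1)

      def exponential_from_bernoulli(n, lam=1.0):
          """Inverse-transform step: X = -ln(1 - U)/lambda is Exp(lambda)."""
          return -np.log1p(-uniform_from_bits(n)) / lam

      x = exponential_from_bernoulli(100000)
      print(x.mean())   # ~1.0 for lam = 1

    Any other target distribution follows the same way by swapping in its inverse CDF, which is the generality the abstract claims for the hardware.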

  1. Completely device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Aguilar, Edgar A.; Ramanathan, Ravishankar; Kofler, Johannes; Pawłowski, Marcin

    2016-08-01

    Quantum key distribution (QKD) is a provably secure way for two distant parties to establish a common secret key, which then can be used in a classical cryptographic scheme. Using quantum entanglement, one can reduce the necessary assumptions that the parties have to make about their devices, giving rise to device-independent QKD (DIQKD). However, in all existing protocols to date the parties need to have an initial (at least partially) random seed as a resource. In this work, we show that this requirement can be dropped. Using recent advances in the fields of randomness amplification and randomness expansion, we demonstrate that it is sufficient for the message the parties want to communicate to be (partially) unknown to the adversaries—an assumption without which any type of cryptography would be pointless to begin with. One party can use her secret message to locally generate a secret sequence of bits, which can then be openly used by herself and the other party in a DIQKD protocol. Hence our work reduces the requirements needed to perform secure DIQKD and establish safe communication.

  2. Sintering of polydisperse viscous droplets

    NASA Astrophysics Data System (ADS)

    Wadsworth, Fabian B.; Vasseur, Jérémie; Llewellin, Edward W.; Dingwell, Donald B.

    2017-03-01

    Sintering—or coalescence—of compacts of viscous droplets is driven by the interfacial tension between the droplets and the interstitial gas phase. The process, which occurs in a range of industrial and natural settings, such as the manufacture of ceramics and the welding of volcanic ash, causes the compact to densify, to become stronger, and to become less permeable. We investigate the role of droplet polydispersivity in sintering dynamics by conducting experiments in which populations of glass spheres with different size distributions are heated to temperatures above the glass transition interval. We quantify the progress of sintering by tracking changes in porosity with time. The sintering dynamics is modeled by treating the system as a random distribution of interstitial gas bubbles shrinking under the action of interfacial tension only. We identify the scaling between the polydispersivity of the initial droplets and the dynamics of bulk densification. The framework that we develop allows the sintering dynamics of arbitrary polydisperse populations of droplets to be predicted if the initial droplet (or particle) size distribution is known.

  3. Finite GUE Distribution with Cut-Off at a Shock

    NASA Astrophysics Data System (ADS)

    Ferrari, P. L.

    2018-03-01

    We consider the totally asymmetric simple exclusion process with initial conditions generating a shock. The fluctuations of particle positions are asymptotically governed by the randomness around the two characteristic lines joining at the shock. Unlike in previous papers, we describe the correlation in space-time without employing the mapping to last passage percolation, which fails to exist already for the partially asymmetric model. We then consider a special case, where the asymptotic distribution is a cut-off of the distribution of the largest eigenvalue of a finite GUE matrix. Finally we discuss the strength of the probabilistic and physically motivated approach and compare it with the mathematical difficulties of a direct computation.

  4. Assessing the significance of global and local correlations under spatial autocorrelation: a nonparametric approach.

    PubMed

    Viladomat, Júlia; Mazumder, Rahul; McInturff, Alex; McCauley, Douglas J; Hastie, Trevor

    2014-06-01

    We propose a method to test the correlation of two random fields when they are both spatially autocorrelated. In this scenario, the assumption of independence for the pair of observations in the standard test does not hold, and as a result we reject in many cases where there is no effect (the precision of the null distribution is overestimated). Our method recovers the null distribution taking into account the autocorrelation. It uses Monte-Carlo methods, and focuses on permuting, and then smoothing and scaling one of the variables to destroy the correlation with the other, while maintaining at the same time the initial autocorrelation. With this simulation model, any test based on the independence of two (or more) random fields can be constructed. This research was motivated by a project in biodiversity and conservation in the Biology Department at Stanford University. © 2014, The International Biometric Society.
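
    A minimal sketch of the permute-smooth-rescale recipe is given below, assuming a Gaussian smoothing kernel whose width sigma is known; in the actual method the smoothing would be tuned so the permuted field reproduces the autocorrelation of the original, and the fields here are synthetic.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(6)

      def null_correlations(x, y, sigma, n_perm=999):
          """Null distribution of corr(x, y) for autocorrelated 2-D fields:
          permute y to destroy the cross-correlation, then smooth and rescale
          the permuted field so its autocorrelation and variance are restored."""
          nulls = np.empty(n_perm)
          for i in range(n_perm):
              y_perm = rng.permutation(y.ravel()).reshape(y.shape)
              y_sm = gaussian_filter(y_perm, sigma)       # restore autocorrelation
              y_sm = (y_sm - y_sm.mean()) / y_sm.std() * y.std() + y.mean()
              nulls[i] = np.corrcoef(x.ravel(), y_sm.ravel())[0, 1]
          return nulls

      # two smooth (hence autocorrelated) fields with no true relationship
      x = gaussian_filter(rng.standard_normal((60, 60)), 4)
      y = gaussian_filter(rng.standard_normal((60, 60)), 4)
      r = np.corrcoef(x.ravel(), y.ravel())[0, 1]
      nulls = null_correlations(x, y, sigma=4)
      print("observed r:", r, " two-sided p:", (np.abs(nulls) >= abs(r)).mean())

    Against the naive test, the widened null distribution prevents the spurious rejections that pure spatial autocorrelation would otherwise produce.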

  5. Simple techniques for improving deep neural network outcomes on commodity hardware

    NASA Astrophysics Data System (ADS)

    Colina, Nicholas Christopher A.; Perez, Carlos E.; Paraan, Francis N. C.

    2017-08-01

    We benchmark improvements in the performance of deep neural networks (DNN) on the MNIST data set upon implementing two simple modifications to the algorithm that have little overhead computational cost. First is GPU parallelization on a commodity graphics card, and second is initializing the DNN with random orthogonal weight matrices prior to optimization. Eigenspectra analysis of the weight matrices reveals that the initially orthogonal matrices remain nearly orthogonal after training. The probability distributions from which these orthogonal matrices are drawn are also shown to significantly affect the performance of these deep neural networks.
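
    Random orthogonal initialization is typically obtained from the QR decomposition of a Gaussian matrix, with a sign correction that makes the draw uniform over the orthogonal group. The sketch below is a generic version of this construction, not necessarily the exact scheme used in the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      def random_orthogonal(n):
          """Random orthogonal matrix via QR of a Gaussian matrix; scaling
          each column by the sign of R's diagonal removes the sign ambiguity
          and makes the distribution uniform (Haar) over O(n)."""
          q, r = np.linalg.qr(rng.standard_normal((n, n)))
          return q * np.sign(np.diag(r))

      W = random_orthogonal(256)
      print(np.allclose(W.T @ W, np.eye(256)))     # True: columns orthonormal
      print(np.abs(np.linalg.eigvals(W)).min())    # eigenvalues on the unit circle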

  6. A Simple Physical Model for Spall from Nuclear Explosions Based Upon Two-Dimensional Nonlinear Numerical Simulations

    DTIC Science & Technology

    1990-05-01

    (Fragment from the report body:) ... forms included (1) analytic distributions of initial velocities which initiate at the same instant across the crack (t_0 is constant), (2) random ... (Eq. 19) ... We note that for any distribution of initial velocities, the high-frequency response will be dominated by the ... body waves from the tension crack model is a narrowband signal. To see this, consider Equation (25): as ω → 0, P(ω) approaches a constant ...

  7. Quantifying Rock Weakening Due to Decreasing Calcite Mineral Content by Numerical Simulations

    PubMed Central

    2018-01-01

    The quantification of changes in geomechanical properties due to chemical reactions is of paramount importance for geological subsurface utilisation, since mineral dissolution generally reduces rock stiffness. In the present study, the effective elastic moduli of two digital rock samples, the Fontainebleau and Bentheim sandstones, are numerically determined based on micro-CT images. Reduction in rock stiffness due to the dissolution of 10% calcite cement by volume out of the pore network is quantified for three synthetic spatial calcite distributions (coating, partial filling and random) using representative sub-cubes derived from the digital rock samples. Due to the reduced calcite content, bulk and shear moduli decrease by 34% and 38% in maximum, respectively. Total porosity is clearly the dominant parameter, while spatial calcite distribution has a minor impact, except for a randomly chosen cement distribution within the pore network. Moreover, applying an initial stiffness reduced by 47% for the calcite cement results only in a slightly weaker mechanical behaviour. Using the quantitative approach introduced here substantially improves the accuracy of predictions in elastic rock properties compared to general analytical methods, and further enables quantification of uncertainties related to spatial variations in porosity and mineral distribution. PMID:29614776

  8. Quantifying Rock Weakening Due to Decreasing Calcite Mineral Content by Numerical Simulations.

    PubMed

    Wetzel, Maria; Kempka, Thomas; Kühn, Michael

    2018-04-01

    The quantification of changes in geomechanical properties due to chemical reactions is of paramount importance for geological subsurface utilisation, since mineral dissolution generally reduces rock stiffness. In the present study, the effective elastic moduli of two digital rock samples, the Fontainebleau and Bentheim sandstones, are numerically determined based on micro-CT images. Reduction in rock stiffness due to the dissolution of 10% calcite cement by volume out of the pore network is quantified for three synthetic spatial calcite distributions (coating, partial filling and random) using representative sub-cubes derived from the digital rock samples. Due to the reduced calcite content, bulk and shear moduli decrease by 34% and 38% in maximum, respectively. Total porosity is clearly the dominant parameter, while spatial calcite distribution has a minor impact, except for a randomly chosen cement distribution within the pore network. Moreover, applying an initial stiffness reduced by 47% for the calcite cement results only in a slightly weaker mechanical behaviour. Using the quantitative approach introduced here substantially improves the accuracy of predictions in elastic rock properties compared to general analytical methods, and further enables quantification of uncertainties related to spatial variations in porosity and mineral distribution.

  9. A stochastic Markov chain model to describe lung cancer growth and metastasis.

    PubMed

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter

    2012-01-01

    A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
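
    A toy version of the construction is sketched below: the steady state of a candidate transition matrix is computed by power iteration, and the matrix is nudged by column-wise rescaling until its steady state matches a target distribution. This crude fixed-point heuristic stands in for the paper's constrained search (which also conditions on an initial metastatic distribution); the 5-site matrices are synthetic.

      import numpy as np

      rng = np.random.default_rng(8)

      def steady_state(P, tol=1e-12):
          """Long-time distribution of the chain: iterate pi <- pi P."""
          pi = np.full(P.shape[0], 1.0 / P.shape[0])
          while True:
              nxt = pi @ P
              if np.abs(nxt - pi).max() < tol:
                  return nxt
              pi = nxt

      def adjust_to_target(P, target, n_iter=1000):
          """Rescale columns toward the target steady state, renormalise rows,
          repeat; a heuristic stand-in for the constrained iterative search."""
          for _ in range(n_iter):
              P = P * (target / steady_state(P))    # column-wise correction
              P = P / P.sum(axis=1, keepdims=True)
          return P

      k = 5                                         # toy network of 5 sites
      P = rng.random((k, k)); P /= P.sum(axis=1, keepdims=True)
      target = rng.random(k); target /= target.sum()
      P = adjust_to_target(P, target)
      print(np.abs(steady_state(P) - target).max()) # residual, typically small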

  10. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
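
    The two-step recipe outlined above translates directly into a few lines of NumPy/SciPy; in the sketch below the Lorentzian-style target spectrum and the exponential target distribution are illustrative assumptions, not the paper's examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2**16

# Step 1: colour white Gaussian noise with an assumed Lorentzian-like
# target power spectrum (illustrative choice, not from the paper).
white = rng.standard_normal(n)
f = np.fft.rfftfreq(n)
shaping = 1.0 / np.sqrt(1.0 + (f / 0.01) ** 2)     # amplitude filter
colored = np.fft.irfft(np.fft.rfft(white) * shaping, n)
colored /= colored.std()                            # unit-variance coloured Gaussian

# Step 2: a single inverse-transform step maps the coloured Gaussian
# to the desired marginal PDF (here: exponential with mean 1).
u = stats.norm.cdf(colored)             # Gaussian -> uniform marginals
samples = stats.expon.ppf(u)            # uniform -> target distribution

print(samples.mean(), samples.var())    # both ~1 for a unit exponential
```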

  11. Determination of the Spatial Distribution in Hydraulic Conductivity Using Genetic Algorithm Optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, A.; Lee, J. H.; Kitanidis, P. K.

    2016-12-01

    Heterogeneity in hydraulic conductivity (K) impacts the transport and fate of contaminants in the subsurface as well as the design and operation of managed aquifer recharge (MAR) systems. Recently, improvements in computational resources and the availability of big data through electrical resistivity tomography (ERT) and remote sensing have provided opportunities to better characterize the subsurface. Yet, there is a need to improve prediction and evaluation methods in order to extract information from field measurements for better field characterization. In this study, genetic algorithm optimization, which has been widely used in optimal aquifer remediation designs, was used to determine the spatial distribution of K. A hypothetical 2 km by 2 km aquifer was considered. A genetic algorithm library, PGAPack, was linked with a fast Fourier transform based random field generator as well as a groundwater flow and contaminant transport simulation model (BIO2D-KE). The objective of the optimization model was to minimize the total squared error between measured and predicted field values. It was assumed that measured K values were available through ERT. The performance of the genetic algorithm in predicting the distribution of K was tested for different cases. In the first one, it was assumed that observed K values were evaluated using the random field generator only as the forward model. In the second case, in addition to K values obtained through ERT, measured head values were incorporated into the evaluation, in which BIO2D-KE and the random field generator were used as the forward models. Lastly, tracer concentrations were used as additional information in the optimization model. Initial results indicated enhanced performance when the random field generator and BIO2D-KE are used in combination in predicting the spatial distribution of K.

  12. Micro-Loans, Insecticide-Treated Bednets, and Malaria: Evidence from a Randomized Controlled Trial in Orissa, India.

    PubMed

    Tarozzi, Alessandro; Mahajan, Aprajit; Blackburn, Brian; Kopf, Dan; Krishnan, Lakshmi; Yoong, Joanne

    2014-07-01

    We describe findings from the first large-scale cluster randomized controlled trial in a developing country that evaluates the uptake of a health-protecting technology, insecticide-treated bednets (ITNs), through micro-consumer loans, as compared to free distribution and control conditions. Despite a relatively high price, 52 percent of sample households purchased ITNs, highlighting the role of liquidity constraints in explaining earlier low adoption rates. We find mixed evidence of improvements in malaria indices. We interpret the results and their implications within the debate about cost sharing, sustainability and liquidity constraints in public health initiatives in developing countries.

  13. On predicting receptivity to surface roughness in a compressible infinite swept wing boundary layer

    NASA Astrophysics Data System (ADS)

    Thomas, Christian; Mughal, Shahid; Ashworth, Richard

    2017-03-01

    The receptivity of crossflow disturbances on an infinite swept wing is investigated using solutions of the adjoint linearised Navier-Stokes equations. The adjoint based method for predicting the magnitude of stationary disturbances generated by randomly distributed surface roughness is described, with the analysis extended to include both surface curvature and compressible flow effects. Receptivity is predicted for a broad spectrum of spanwise wavenumbers, variable freestream Reynolds numbers, and subsonic Mach numbers. Curvature is found to play a significant role in the receptivity calculations, while compressible flow effects are only found to marginally affect the initial size of the crossflow instability. A Monte Carlo type analysis is undertaken to establish the mean amplitude and variance of crossflow disturbances generated by the randomly distributed surface roughness. Mean amplitudes are determined for a range of flow parameters that are maximised for roughness distributions containing a broad spectrum of roughness wavelengths, including those that are most effective in generating stationary crossflow disturbances. A control mechanism is then developed where the short scale roughness wavelengths are damped, leading to significant reductions in the receptivity amplitude.

  14. Phage display peptide libraries: deviations from randomness and correctives

    PubMed Central

    Ryvkin, Arie; Ashkenazy, Haim; Weiss-Ottolenghi, Yael; Piller, Chen; Pupko, Tal; Gershoni, Jonathan M

    2018-01-01

    Peptide-expressing phage display libraries are widely used for the interrogation of antibodies. Affinity selected peptides are then analyzed to discover epitope mimetics, or are subjected to computational algorithms for epitope prediction. A critical assumption for these applications is the random representation of amino acids in the initial naïve peptide library. In a previous study, we implemented next generation sequencing to evaluate a naïve library and discovered severe deviations from randomness in UAG codon over-representation as well as in high G phosphoramidite abundance causing amino acid distribution biases. In this study, we demonstrate that the UAG over-representation can be attributed to the burden imposed on the phage upon the assembly of the recombinant Protein 8 subunits. This was corrected by constructing the libraries using supE44-containing bacteria, which suppress the UAG-driven abortive termination. We also demonstrate that the overabundance of G stems from variant synthesis efficiency and can be corrected using compensating oligonucleotide mixtures calibrated by mass spectrometry. Construction of libraries implementing these correctives results in markedly improved libraries that display a random distribution of amino acids, thus ensuring that enriched peptides obtained in biopanning represent a genuine selection event, a fundamental assumption for phage display applications. PMID:29420788

  15. Fast, Distributed Algorithms in Deep Networks

    DTIC Science & Technology

    2016-05-11

    [Fragmentary record; only disconnected excerpts of this report survive.] ... Training is complete when (2) converges or, stated alternatively, when the difference between t and φL can no longer be ... the state-of-the-art approaches simply rely on random initialization. We propose an alternative ... [Figure: (a) Features in 1-dimensional space; (b) Features ...]

  16. Certified Reduced Basis Model Characterization: a Frequentistic Uncertainty Framework

    DTIC Science & Technology

    2011-01-11

    [Fragmentary record; only disconnected excerpts of this report survive.] ... It then follows that the Legendre coefficient random vector (Z(0), Z(1), ..., Z(I))(ω) is (I+1)-variate normally distributed with mean (δ ... Note each two-sided inequality represents two constraints. 3. PDE-Based Statistical Inference: We now proceed to the parametrized partial ... appearance of defects or geometric variations relative to an initial baseline, or perhaps manufacturing departures from nominal specifications ...

  17. Decaying two-dimensional turbulence in a circular container.

    PubMed

    Schneider, Kai; Farge, Marie

    2005-12-09

    We present direct numerical simulations of two-dimensional decaying turbulence at an initial Reynolds number of 5 × 10^4 in a circular container with no-slip boundary conditions. Starting from random initial conditions, the flow rapidly exhibits self-organization into coherent vortices. We study their formation and the role of the viscous boundary layer on the production and decay of integral quantities. The no-slip wall produces vortices which are injected into the bulk flow and tend to compensate the enstrophy dissipation. The self-organization of the flow is reflected by the transition of the initially Gaussian vorticity probability density function (PDF) towards a distribution with exponential tails. Because of the presence of coherent vortices, the pressure PDF becomes strongly skewed with exponential tails for negative values.

  18. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    PubMed

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
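
    For context, all of the RE-family methods compared above rest on the same Metropolis criterion for exchanging configurations between two temperatures; the sketch below shows that criterion in isolation (generic textbook form, not the STDR or VREX bookkeeping, and the unit constant is an assumption about the energy units).

```python
import numpy as np

rng = np.random.default_rng(1)
kB = 0.0019872  # Boltzmann constant in kcal/(mol K); assumes energies in kcal/mol

def swap_accepted(E_i, E_j, T_i, T_j):
    """Metropolis criterion for exchanging configurations between two
    replicas at temperatures T_i and T_j with potential energies E_i, E_j."""
    beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < np.exp(delta)

# Example: attempt a swap between a 300 K replica and a 330 K replica
print(swap_accepted(E_i=-100.0, E_j=-120.0, T_i=300.0, T_j=330.0))
```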

  19. Splash detail due to a single grain incident on a granular bed.

    PubMed

    Tanabe, Takahiro; Shimada, Takashi; Ito, Nobuyasu; Nishimori, Hiraku

    2017-02-01

    Using the discrete element method, we study the splash processes induced by the impact of a grain on a randomly packed bed. Good correspondence is obtained between our numerical results and the findings of previous experiments for the movement of ejected grains. Furthermore, the distributions of the ejection angle and ejection speed for individual grains vary depending on the relative timing at which the grains are ejected after the initial impact. Obvious differences are observed between the distributions of grains ejected during the earlier and later splash periods: the form of the vertical ejection-speed distribution varies from a power-law form to a lognormal form with time; this difference may determine grain trajectory after ejection.

  20. Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile

    NASA Astrophysics Data System (ADS)

    Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.

    2012-09-01

    Most superdiffusive Non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question, by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half the present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
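
    A minimal simulation of this model class is straightforward; in the sketch below the walker recalls a past step drawn from a Gaussian centered at t/2 with width growing linearly in t, and repeats it with a fixed probability p. The parameter values are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_memory_walk(n_steps, sigma_rel=0.1, p=0.9):
    """Non-Markovian walk whose memory of the past is weighted by a
    Gaussian centered at t/2 with standard deviation sigma_rel * t
    (a sketch of the model class; parameter values are illustrative)."""
    steps = np.empty(n_steps, dtype=int)
    steps[0] = 1                                 # the first step is arbitrary
    for t in range(1, n_steps):
        mu, sd = t / 2.0, max(sigma_rel * t, 1e-9)
        k = int(round(rng.normal(mu, sd)))
        k = min(max(k, 0), t - 1)                # clip the recalled time into the past
        # repeat the recalled step with probability p, reverse it otherwise
        steps[t] = steps[k] if rng.random() < p else -steps[k]
    return np.cumsum(steps)

x = gaussian_memory_walk(20_000)
print("final displacement:", x[-1])
```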

  1. Identification of Volcanic Landforms and Processes on Earth and Mars using Geospatial Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Fagents, S. A.; Hamilton, C. W.

    2009-12-01

    Nearest neighbor (NN) analysis enables the identification of landforms using non-morphological parameters and can be useful for constraining the geological processes contributing to observed patterns of spatial distribution. Explosive interactions between lava and water can generate volcanic rootless cone (VRC) groups that are well suited to geospatial analyses because they consist of a large number of landforms that share a common formation mechanism. We have applied NN analysis tools to quantitatively compare the spatial distribution of VRCs in the Laki lava flow in Iceland to analogous landforms in the Tartarus Colles Region of eastern Elysium Planitia, Mars. Our results show that rootless eruption sites on both Earth and Mars exhibit systematic variations in spatial organization that are related to variations in the distribution of resources (lava and water) at different scales. Field observations in Iceland reveal that VRC groups are composite structures formed by the emplacement of chronologically and spatially distinct domains. Regionally, rootless cones cluster into groups and domains, but within domains NN distances exhibit random to repelled distributions. This suggests that on regional scales VRCs cluster in locations that contain sufficient resources, whereas on local scales rootless eruption sites tend to self-organize into distributions that maximize the utilization of limited resources (typically groundwater). Within the Laki lava flow, near-surface water is abundant and pre-eruption topography appears to exert the greatest control on both lava inundation regions and clustering of rootless eruption sites. In contrast, lava thickness appears to be the controlling factor in the formation of rootless eruption sites in the Tartarus Colles Region. A critical lava thickness may be required to initiate rootless eruptions on Mars because the lava flows must contain sufficient heat for transferred thermal energy to reach the underlying cryosphere and volatilize buried ground ice. In both environments, the spatial distribution of rootless eruption sites on local scales may either be random, which indicates that rootless eruption sites form independently of one another, or repelled, which implies resource limitation. Where competition for limited groundwater causes rootless eruption sites to develop greater than random NN separation, rootless eruption sites can be modeled as a system of pumping wells that extract water from a shared aquifer, thereby generating repelled distributions due to non-initiation or early cessation of rootless explosive activity at sites with insufficient access to groundwater. Thus statistical NN analyses can be combined with field observations and remote sensing to obtain information about self-organization processes within geological systems and the effects of environmental resource limitation on the spatial distribution of volcanic landforms. NN analyses may also be used to quantitatively compare the spatial distribution of landforms in different planetary environments and for supplying non-morphological evidence to discriminate between feature identities and geological formation mechanisms.
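
    The NN statistics described above are commonly summarized by the Clark-Evans ratio, the observed mean nearest-neighbor distance divided by the expectation under complete spatial randomness; the sketch below computes it for an invented point set standing in for mapped cone centers (edge effects ignored).

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Illustrative point set standing in for mapped rootless-cone centers;
# a real analysis would use coordinates mapped within the flow outline.
pts = rng.uniform(0.0, 1000.0, size=(500, 2))    # meters, 1 km x 1 km region

tree = cKDTree(pts)
d, _ = tree.query(pts, k=2)          # k=2: nearest neighbor other than self
mean_nn = d[:, 1].mean()

density = len(pts) / (1000.0 * 1000.0)
expected = 0.5 / np.sqrt(density)    # Clark-Evans expectation for randomness

R = mean_nn / expected               # R ~ 1 random, < 1 clustered, > 1 repelled
print(f"Clark-Evans ratio R = {R:.2f}")
```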

  2. Bayesian lead time estimation for the Johns Hopkins Lung Project data.

    PubMed

    Jang, Hyejeong; Kim, Seongho; Wu, Dongfeng

    2013-09-01

    Lung cancer screening using X-rays has been controversial for many years. A major concern is whether lung cancer screening really brings any survival benefits, which depends on effective treatment after early detection. The problem was analyzed from a different point of view, and estimates of the projected lead time were presented for participants in a lung cancer screening program using the Johns Hopkins Lung Project (JHLP) data. The newly developed method of lead time estimation was applied, where the lifetime T was treated as a random variable rather than a fixed value, so that the number of future screenings for a given individual is also a random variable. Using the actuarial life table available from the United States Social Security Administration, the lifetime distribution was first obtained; the lead time distribution was then projected using the JHLP data. The data analysis with the JHLP data shows that, for a male heavy smoker with initial screening ages of 50, 60, and 70, the probability of no early detection with semiannual screens will be 32.16%, 32.45%, and 33.17%, respectively, while the mean lead time is 1.36, 1.33 and 1.23 years. The probability of no early detection increases monotonically when the screening interval increases, and it increases slightly as the initial age increases for the same screening interval. The mean lead time and its standard error decrease when the screening interval increases for all age groups, and both decrease when the initial age increases with the same screening interval. The overall mean lead time estimated with a random lifetime T is slightly less than that with a fixed value of T. It is hoped that this result will be of benefit in improving current screening programs. Copyright © 2013 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.

  3. All about Eve: Secret Sharing using Quantum Effects

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    2005-01-01

    This document discusses the nature of light (including classical light and photons), encryption, quantum key distribution (QKD), light polarization and beamsplitters and their application to information communication. A quantum of light represents the smallest possible subdivision of radiant energy (light) and is called a photon. The QKD key generation sequence is outlined including the receiver broadcasting the initial signal indicating reception availability, timing pulses from the sender to provide reference for gated detection of photons, the sender generating photons through random polarization while the receiver detects photons with random polarization and communicating via data link to mutually establish random keys. The QKD network vision includes inter-SATCOM, point-to-point Gnd Fiber and SATCOM-fiber nodes. QKD offers an unconditionally secure method of exchanging encryption keys. Ongoing research will focus on how to increase the key generation rate.
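
    The random-basis key-generation sequence outlined above can be sketched in a few lines (a BB84-style sifting step; the array sizes and variable names are illustrative, and channel loss, eavesdropping checks and error correction are omitted).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32  # raw pulses (illustrative; real links use far more)

# The sender picks random bits and random bases (0 = rectilinear, 1 = diagonal);
# the receiver measures each photon in an independently random basis.
bits = rng.integers(0, 2, n)
send_basis = rng.integers(0, 2, n)
recv_basis = rng.integers(0, 2, n)

# A measurement agrees with the sent bit only when the bases match; mismatched
# bases give a random outcome, so those positions are discarded over the data link.
match = send_basis == recv_basis
sifted_key = bits[match]             # both parties keep only the matching slots
print("sifted key:", sifted_key, f"({match.sum()} of {n} pulses kept)")
```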

  4. Nuclear Pasta at Finite Temperature with the Time-Dependent Hartree-Fock Approach

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2016-01-01

    We present simulations of neutron-rich matter at sub-nuclear densities, such as supernova matter. With the time-dependent Hartree-Fock approximation we can study the evolution of the system at temperatures of several MeV, employing a full Skyrme interaction in a periodic three-dimensional grid [1]. The initial state consists of α particles randomly distributed in space with a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi-distributed plane waves, the calculations reflect a reasonable approximation of astrophysical matter. The matter evolves into spherical, rod-like, connected rod-like and slab-like shapes. Furthermore, we observe gyroid-like structures, discussed e.g. in [2], which are formed spontaneously for certain values of the simulation box length. The ρ-T map of pasta shapes is basically consistent with the phase diagrams obtained from QMD calculations [3]. By an improved topological analysis based on Minkowski functionals [4], all observed pasta shapes can be uniquely identified by only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell-density distribution as a measure to distinguish pasta matter from uniform matter.

  5. A method to generate the surface cell layer of the 3D virtual shoot apex from apical initials.

    PubMed

    Kucypera, Krzysztof; Lipowczan, Marcin; Piekarska-Stachowiak, Anna; Nakielski, Jerzy

    2017-01-01

    The development of the cell pattern in the surface cell layer of the shoot apex can be investigated in vivo by use of time-lapse confocal images showing the naked meristem in 3D at successive times. However, how this layer originates from apical initials and develops as a result of growth and divisions of their descendants remains unknown. This is an open area for computer modelling. A method to generate the surface cell layer is presented on the example of the 3D paraboloidal shoot apical dome. In the model used, the layer originates from three apical initials that meet at the dome summit and develops through growth and cell divisions under isotropic surface growth, defined by the growth tensor. The cells, which are described by polyhedrons, divide anticlinally with the smallest division plane, which passes, depending on the mode used, through the cell center or through a point found randomly near this center. The formation of the surface cell pattern is described with attention paid to the activity of the apical initials and the fates of their descendants. The computer-generated surface layer, which included about 350 cells, required about 1200 divisions of the apical initials and their derivatives. The derivatives were arranged into three more or less equal clonal sectors composed of cellular clones of different ages. Each apical initial renewed itself 7-8 times to produce its sector, and the successive divisions of the initial were manifested in the shape and location of the cellular clones. The application of the random factor resulted in a more realistic cell pattern in comparison to the pure mode. The cell divisions were analyzed statistically in top view. When all of the division walls were considered, their angular distribution was uniform, whereas in the distribution limited to apical initials only, some preferences related to their arrangement at the dome summit were observed. A realistic surface cell pattern was obtained. The present method is a useful tool to generate the surface cell layer, to study the activity of initial cells and their derivatives, and to examine how cell expansion and division are coordinated during growth. We expect its further application to clarify the question of the number and permanence or impermanence of initial cells, and the possible relationship between their shape and oriented divisions, both on the grounds of the growth tensor approach.

  6. Thermal effects on domain orientation of tetragonal piezoelectrics studied by in situ x-ray diffraction

    NASA Astrophysics Data System (ADS)

    Chang, Wonyoung; King, Alexander H.; Bowman, Keith J.

    2006-06-01

    Thermal effects on domain orientation in tetragonal lead zirconate titanate (PZT) and lead titanate (PT) have been investigated by using in situ x-ray diffraction with an area detector. In the case of a soft PZT, it is found that the texture parameter called multiples of a random distribution (MRD) initially increases with temperature up to approximately 100°C and then falls to unity at temperatures approaching the Curie temperature, whereas the MRD of hard PZT and PT initially undergoes a smaller increase or no change. The relationship between the mechanical strain energy and domain wall mobility with temperature is discussed.

  7. Pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Wu, Shaochuan; Tan, Xuezhi

    2007-11-01

    By analyzing various kinds of address configuration algorithms, this paper proposes a new pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks. In PRDAC, the first node that initializes the network randomly chooses a nonlinear shift register that generates an m-sequence. When another node joins the network, the initial node acts as an IP address configuration server: it computes an IP address from this shift register, allocates the address, and tells the new node the generator polynomial of the register. By this means, any node that has obtained an IP address can in turn act as a server and allocate addresses to subsequent joining nodes. PRDAC can also efficiently avoid IP conflicts and handle network partition and merging, as prophet address (PA) allocation and the dynamic configuration and distribution protocol (DCDP) do. Furthermore, PRDAC has lower algorithmic and computational complexity, and relies on less restrictive assumptions, than PA. In addition, PRDAC radically avoids address conflicts and maximizes the utilization rate of IP addresses. Analysis and simulation results show that PRDAC has rapid convergence, low overhead and insensitivity to topological structure.
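
    The m-sequence machinery behind this scheme is a maximal-length feedback shift register; the sketch below uses a fixed 8-bit linear register with a primitive polynomial for illustration (the protocol itself uses a nonlinear register whose generator polynomial is chosen by the initial node, so this is a simplification).

```python
def lfsr_states(seed=0x01):
    """8-bit Fibonacci LFSR with taps (8, 6, 5, 4), a primitive polynomial,
    so the state cycles through all 255 nonzero values (an m-sequence)."""
    s = seed & 0xFF
    while True:
        # XOR of the tapped bits (positions 8, 6, 5, 4 counted from the left)
        bit = (s ^ (s >> 2) ^ (s >> 3) ^ (s >> 4)) & 1
        s = ((s >> 1) | (bit << 7)) & 0xFF
        yield s

gen = lfsr_states()
# Derive host parts of IP addresses from successive register states
# (the 10.0.0.x prefix is an arbitrary illustrative choice).
addresses = [f"10.0.0.{next(gen)}" for _ in range(5)]
print(addresses)
```

    Because the register cycles through all 255 nonzero states exactly once per period, addresses derived from successive states are conflict-free until the sequence wraps around.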

  8. Network meta-analysis of disconnected networks: How dangerous are random baseline treatment effects?

    PubMed

    Béliveau, Audrey; Goring, Sarah; Platt, Robert W; Gustafson, Paul

    2017-12-01

    In network meta-analysis, the use of fixed baseline treatment effects (a priori independent) in a contrast-based approach is regularly preferred to the use of random baseline treatment effects (a priori dependent). That is because, often, there is not a need to model baseline treatment effects, which carry the risk of model misspecification. However, in disconnected networks, fixed baseline treatment effects do not work (unless extra assumptions are made), as there is not enough information in the data to update the prior distribution on the contrasts between disconnected treatments. In this paper, we investigate to what extent the use of random baseline treatment effects is dangerous in disconnected networks. We take 2 publicly available datasets of connected networks and disconnect them in multiple ways. We then compare the results of treatment comparisons obtained from a Bayesian contrast-based analysis of each disconnected network using random normally distributed and exchangeable baseline treatment effects to those obtained from a Bayesian contrast-based analysis of their initial connected network using fixed baseline treatment effects. For the 2 datasets considered, we found that the use of random baseline treatment effects in disconnected networks was appropriate. Because those datasets were not cherry-picked, there should be other disconnected networks that would benefit from being analyzed using random baseline treatment effects. However, there is also a risk for the normality and exchangeability assumption to be inappropriate in other datasets even though we have not observed this situation in our case study. We provide code, so other datasets can be investigated. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.

  10. Network topology and resilience analysis of South Korean power grid

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hwan; Eisenberg, Daniel A.; Chun, Yeong Han; Park, Jeryang

    2017-01-01

    In this work, we present topological and resilience analyses of the South Korean power grid (KPG) across a broad range of voltage levels. While topological analysis of the KPG restricted to high-voltage infrastructure shows an exponential degree distribution, providing further empirical evidence on power grid topology, the inclusion of low-voltage components generates a distribution with a larger variance and a smaller average degree. This result suggests that the topology of a power grid may converge to a highly skewed degree distribution if more low-voltage data are considered. Moreover, when compared to ER random and BA scale-free networks, the KPG has a lower efficiency and a higher clustering coefficient, implying that a highly clustered structure does not necessarily guarantee the functional efficiency of a network. Error and attack tolerance analysis, evaluated with efficiency, indicates that the KPG is more vulnerable to random or degree-based attacks than to betweenness-based intentional attack. Cascading failure analysis with a recovery mechanism demonstrates that the resilience of the network depends on both tolerance capacity and recovery initiation time. Also, when the two factors are fixed, the KPG is the most vulnerable among the three networks. Based on our analysis, we propose that the topology of power grids should be designed so that loads are homogeneously distributed, or so that functional hubs and their neighbors have high tolerance capacity, to enhance resilience.
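
    A toy version of the attack-tolerance analysis is easy to reproduce with networkx; the graph below is a stand-in ER random network, not the KPG data, and the attack simply removes the ten highest-degree nodes before re-measuring global efficiency.

```python
import networkx as nx

# Sketch of a degree-based attack analysis on a stand-in random graph
# (the actual study uses the Korean power grid data).
G = nx.erdos_renyi_graph(n=200, p=0.03, seed=5)
print("initial efficiency:", nx.global_efficiency(G))

# Remove the 10 highest-degree nodes and re-measure network efficiency.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]
G.remove_nodes_from(n for n, _ in hubs)
print("after degree-based attack:", nx.global_efficiency(G))
```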

  11. Anisotropy in Fracking: A Percolation Model for Observed Microseismicity

    NASA Astrophysics Data System (ADS)

    Norris, J. Quinn; Turcotte, Donald L.; Rundle, John B.

    2015-01-01

    Hydraulic fracturing (fracking), using high pressures and a low-viscosity fluid, allows the extraction of large quantities of oil and gas from very low permeability shale formations. The initial production of oil and gas at depth leads to high pressures and an extensive distribution of natural fractures, which reduce the pressures. With time these fractures heal, sealing the remaining oil and gas in place. High-volume fracking opens the healed fractures, allowing the oil and gas to flow to horizontal production wells. We model the injection process using invasion percolation. We use a 2D square lattice of bonds to model the sealed natural fractures. The bonds are assigned random strengths, and the fluid, injected at a point, opens the weakest bond adjacent to the growing cluster of opened bonds. Our model exhibits burst dynamics, in which the clusters extend rapidly into regions with weak bonds. We associate these bursts with the microseismic activity generated by fracking injections. A principal object of this paper is to study the role of anisotropic stress distributions. Bonds in the y-direction are assigned higher random strengths than bonds in the x-direction. We illustrate the spatial distribution of clusters and the spatial distribution of bursts (small earthquakes) for several degrees of anisotropy. The results are compared with observed distributions of microseismicity in a fracking injection. Both our bursts and the observed microseismicity satisfy Gutenberg-Richter frequency-size statistics.
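
    The invasion-percolation rule described above (always open the weakest bond adjacent to the growing cluster) is compactly expressed with a priority queue; in this sketch the anisotropy is imposed by scaling y-bond strengths by an arbitrary factor of 1.5, and the lattice and cluster sizes are illustrative.

```python
import heapq
import numpy as np

rng = np.random.default_rng(6)
N = 64  # lattice size

strength = {}          # (site_a, site_b) -> random bond strength, lazily drawn
def bond_strength(a, b):
    key = (min(a, b), max(a, b))
    if key not in strength:
        vertical = a[0] != b[0]            # bond along the y-direction
        # anisotropy: y-bonds drawn stronger by an arbitrary factor of 1.5
        strength[key] = rng.random() * (1.5 if vertical else 1.0)
    return strength[key]

def neighbors(s):
    i, j = s
    return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= i + di < N and 0 <= j + dj < N]

# Invasion percolation: repeatedly open the weakest bond on the cluster boundary.
start = (N // 2, N // 2)                   # injection point
cluster = {start}
frontier = []
for nb in neighbors(start):
    heapq.heappush(frontier, (bond_strength(start, nb), nb))

while len(cluster) < 500 and frontier:
    _, site = heapq.heappop(frontier)      # weakest boundary bond opens next
    if site in cluster:
        continue
    cluster.add(site)
    for nb in neighbors(site):
        if nb not in cluster:
            heapq.heappush(frontier, (bond_strength(site, nb), nb))

print("cluster size:", len(cluster))
```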

  12. Population pharmacokinetics of valnemulin in swine.

    PubMed

    Zhao, D H; Zhang, Z; Zhang, C Y; Liu, Z C; Deng, H; Yu, J J; Guo, J P; Liu, Y H

    2014-02-01

    This study was carried out in 121 pigs to develop a population pharmacokinetic (PPK) model by oral (p.o.) administration of valnemulin at a single dose of 10 mg/kg. Serum biochemistry parameters of each pig were determined prior to drug administration. Three to five blood samples were collected at random time points, but uniformly distributed in the absorption, distribution, and elimination phases of drug disposition. Plasma concentrations of valnemulin were determined by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The concentration-time data were fitted to PPK models using nonlinear mixed effect modeling (NONMEM) with G77 FORTRAN compiler. NONMEM runs were executed using Wings for NONMEM. Fixed effects of weight, age, sex as well as biochemistry parameters, which may influence the PK of valnemulin, were investigated. The drug concentration-time data were adequately described by a one-compartmental model with first-order absorption. A random effect model of valnemulin revealed a pattern of log-normal distribution, and it satisfactorily characterized the observed interindividual variability. The distribution of random residual errors, however, suggested an additive model for the initial phase (<12 h) followed by a combined model that consists of both proportional and additive features (≥ 12 h), so that the intra-individual variability could be sufficiently characterized. Covariate analysis indicated that body weight had a conspicuous effect on valnemulin clearance (CL/F). The featured population PK values of Ka , V/F and CL/F were 0.292/h, 63.0 L and 41.3 L/h, respectively. © 2013 John Wiley & Sons Ltd.

  13. Fractality and growth of He bubbles in metals

    NASA Astrophysics Data System (ADS)

    Kajita, Shin; Ito, Atsushi M.; Ohno, Noriyasu

    2017-08-01

    Pinholes are formed on the surfaces of metals by exposure to helium plasmas, and they are regarded as the initial stage of the growth of fuzzy nanostructures. In this study, the number density of the pinholes is investigated in detail from scanning electron microscope (SEM) micrographs of tungsten and tantalum exposed to helium plasmas. A power-law relation was identified between the number density and the size of pinholes. From the slope and the region where the power law was satisfied, the fractal dimension D and smin, which characterize the SEM images, are deduced. Parametric dependences and the material dependence of D and smin are revealed. To explain the fractality, simple Monte Carlo simulations including random walks of He atoms and absorption on bubbles were introduced. It is shown that the initial position of the random walk is one of the key factors for reproducing the fractality. The results indicated that new nucleations of bubbles are necessary to reproduce the number-density distribution of bubbles.

  14. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    PubMed

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.

  15. Effects of Spatial Variability of Soil Properties on the Triggering of Rainfall-Induced Shallow Landslides

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2015-04-01

    Naturally occurring spatial variations in soil properties (e.g., soil depth, moisture, and texture) affect key hydrological processes and potentially the mechanical response of soil to hydromechanical loading (relative to the commonly assumed uniform soil mantle). We quantified the effects of soil spatial variability on the triggering of rainfall-induced shallow landslides at the hillslope and catchment scales, using a physically based landslide triggering model that considers interacting soil columns with mechanical strength thresholds (represented by the Fiber Bundle Model). The spatial variations in soil properties are represented as Gaussian random distributions, and the level of variation is characterized by the coefficient of variation and correlation lengths of the soil properties (i.e., soil depth, soil texture and initial water content in this study). The impacts of these spatial variations on landslide triggering characteristics were measured by comparing the times to triggering and landslide volumes between heterogeneous and homogeneous cases. Results at the hillslope scale indicate that, for spatial variations of an individual property (without cross-correlation), increasing the coefficient of variation introduces weak spots where mechanical damage is accelerated, leading to earlier onset of landslide triggering and smaller volumes. Increasing the spatial correlation length of soil texture and initial water content also induces earlier landslide triggering and smaller released volumes, due to the transition of the failure mode from brittle to ductile failure. In contrast, increasing the spatial correlation length of soil depth "reduces" local steepness and postpones landslide triggering. Cross-correlated soil properties generally promote landslide initiation, but depending on the internal structure of the spatial distribution of each soil property, landslide triggering may be reduced. The effects of cross-correlation between initial water content and soil texture were investigated in detail at the catchment scale by incorporating correlations of both variables with topography. Results indicate that the internal structure of the spatial distribution of each soil property, together with their interplay, determines the overall effect of the coupled spatial variability. This study emphasizes the importance of both the randomness and the spatial structure of soil properties for landslide triggering and characteristics.

  16. Stochastic resetting in backtrack recovery by RNA polymerases

    NASA Astrophysics Data System (ADS)

    Roldán, Édgar; Lisica, Ana; Sánchez-Taltavull, Daniel; Grill, Stephan W.

    2016-06-01

    Transcription is a key process in gene expression, in which RNA polymerases produce a complementary RNA copy from a DNA template. RNA polymerization is frequently interrupted by backtracking, a process in which polymerases perform a random walk along the DNA template. Recovery of polymerases from the transcriptionally inactive backtracked state is determined by a kinetic competition between one-dimensional diffusion and RNA cleavage. Here we describe backtrack recovery as a continuous-time random walk, where the time for a polymerase to recover from a backtrack of a given depth is described as a first-passage time of a random walker to reach an absorbing state. We represent RNA cleavage as a stochastic resetting process and derive exact expressions for the recovery time distributions and mean recovery times from a given initial backtrack depth for both continuous and discrete-lattice descriptions of the random walk. We show that recovery time statistics do not depend on the discreteness of the DNA lattice when the rate of one-dimensional diffusion is large compared to the rate of cleavage.
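
    The kinetic competition described above can be simulated directly as a continuous-time random walk with resetting; the sketch below estimates the mean recovery time from a given backtrack depth using a Gillespie-style update, with hopping and cleavage rates chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def recovery_time(depth, k_hop=1.0, k_cleave=0.1):
    """First-passage time for a discrete backtrack of the given depth: the
    polymerase hops +/-1 base at rate k_hop in each direction, and RNA
    cleavage at rate k_cleave resets it to the active state (depth 0).
    Gillespie-style sketch; rate values are illustrative."""
    pos, t = depth, 0.0
    while pos > 0:
        total = 2 * k_hop + k_cleave
        t += rng.exponential(1.0 / total)          # waiting time to the next event
        u = rng.random() * total
        if u < k_cleave:
            pos = 0                                # cleavage: instantaneous recovery
        elif u < k_cleave + k_hop:
            pos += 1                               # diffuse deeper into the backtrack
        else:
            pos -= 1                               # diffuse toward recovery
    return t

times = [recovery_time(5) for _ in range(2000)]
print("mean recovery time:", np.mean(times))
```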

  17. The Distribution of Ice in Lunar Permanently Shadowed Regions: Science Enabling Exploration (Invited)

    NASA Astrophysics Data System (ADS)

    Hurley, D.; Elphic, R. C.; Bussey, B.; Hibbitts, C.; Lawrence, D. J.

    2013-12-01

    Recent prospecting indicates that water ice occurs in enhanced abundances in some lunar PSRs. That water constitutes a resource that would enable lunar exploration if it can be harvested for fuel and life support. Future lunar exploration missions will need detailed information about the distribution of volatiles in lunar permanently shadowed regions (PSRs). In addition, the volatiles also offer key insights into the recent and distant past, as the PSRs have trapped volatiles delivered to the Moon over ~2 Gyr. This comprises an unparalleled reservoir of past inner solar system volatiles, and future scientific missions are needed to make the measurements that will reveal the composition of those volatiles. These scientific missions will necessarily have to acquire and analyze samples of volatiles from the PSRs. For both exploration and scientific purposes, the precise location of volatiles will need to be known. However, data indicate that ice is distributed heterogeneously on the Moon. It is unlikely that the distribution will be known a priori with enough spatial resolution to guarantee access to volatiles using a single point sample. Some mechanism for laterally or vertically distributed access will increase the likelihood of acquiring a rich sample of volatiles. Trade studies will need to be conducted to anticipate the necessary range and duration of missions to lunar PSRs that will be needed to accomplish the mission objectives. We examine the spatial distribution of volatiles in lunar PSRs reported from data analyses and couple those with models of smaller-scale processes. FUV and laser data from PSRs indicate that the average surface distribution is consistent with low abundances on the extreme surface in most PSRs. Neutron and radar data that probe the distribution at depth show heterogeneity at broad spatial resolution. We consider those data in conjunction with the model to understand the full, 3-D nature of the heterogeneity. A Monte Carlo technique simulates the stochastic process of impact gardening on a putative ice deposit. The model uses the crater production function as a basis for generating a random selection of impact craters over time. Impacts are implemented by modifying the topography, volatile content, and depth distribution in the simulation volume on a case-by-case basis. This technique will never be able to reproduce the exact impact history of a particular area, but by conducting multiple runs with the same initial conditions and a different seed for the random number generator, we are able to calculate the probability of situations occurring. Further, by repeating the simulations with varied initial conditions, we calculate the dependence of the expectation values on the inputs. We present findings regarding the heterogeneity of volatiles in PSRs as a function of age, initial ice thickness, and contributions from steady sources.

  18. Brain Activity in Fairness Consideration during Asset Distribution: Does the Initial Ownership Play a Role?

    PubMed Central

    Wu, Yin; Hu, Jie; van Dijk, Eric; Leliveld, Marijke C.; Zhou, Xiaolin

    2012-01-01

    Previous behavioral studies have shown that initial ownership influences individuals' fairness consideration and other-regarding behavior. However, it is not entirely clear whether initial ownership influences brain activity when a recipient evaluates the fairness of asset distribution. In this study, we randomly assigned the bargaining property (monetary reward) to either the allocator or the recipient in the ultimatum game and let participants of the study, acting as recipients, receive either disadvantageous unequal, equal, or advantageous unequal offers from allocators while event-related potentials (ERPs) were recorded. Behavioral results showed that participants were more likely to reject disadvantageous unequal and equal offers when they initially owned the property than when they did not. The two types of unequal offers evoked more negative-going ERPs (the MFN) than the equal offers in an early time window, and the differences were not modulated by initial ownership. In a late time window, however, the P300 responses to division schemes were affected not only by the type of unequal offer but also by whom the property was initially assigned to. These findings suggest that while the MFN may function as a general mechanism that evaluates whether the offer is consistent or inconsistent with the equity rule, the P300 is sensitive to top-down controlled processes, into which factors related to the allocation of attentional resources, including initial ownership and personal interests, come into play. PMID:22761850

  19. Hot-Jupiter Breakfasts Realign Stars

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2015-08-01

    Two researchers at the University of Chicago have recently developed a new theory to explain an apparent dichotomy in the orbits of planets around cool vs. hot stars. Their model proposes that the spins of cool stars are affected when they ingest hot Jupiters (HJs) early in their stellar lifetimes. A Puzzling Dichotomy: In exoplanet studies, there is a puzzling difference observed between planet orbits around cool and hot (those with Teff ≥ 6250 K) stars: the orbital planes of planets around cool stars are primarily aligned with the host star's spin, whereas the orbital planes of planets around hot stars seem to be randomly distributed. Previous attempts to explain this dichotomy have focused on tidal interactions between the host star and the planets observed in the system. Now Titos Matsakos and Arieh Königl have taken these models a step further, by including in their calculations not only the effects of observed planets, but also those of HJs that may have been swallowed by the star long before we observed the systems. Modeling Meals: [Figure: plots of the distribution of the obliquity λ for hot Jupiters around cool hosts (upper plot) and hot hosts (lower plot); the dashed line shows the initial distribution, the bins show the model prediction for the final distribution after the systems evolve, and the black dots show the current observational data. Matsakos & Königl, 2015.] The authors' model assumes that as HJs are formed and migrate inward through the protoplanetary disk, they stall out near the star (where they have periods of ~2 days) and get stranded as the gas disk evaporates around them. Tidal interactions can cause these planets to become ingested by the host star within 1 Gyr. Using Monte Carlo simulations, the authors model these star-planet tidal interactions and evolve a total of 10^6 systems: half with hot (Teff = 6400 K), main-sequence hosts, and half with cool (Teff = 5500 K), solar-type hosts. The initial obliquities (the angle between the stellar spin and the planets' orbital angular momentum vectors) are randomly distributed between 0° and 180°. The authors find that early stellar ingestion of planets might be very common: to match observations, roughly half of all stellar hosts must ingest an HJ early in their lifetimes! This scenario results in a good match with observational data: about 50% of cool hosts' spins become roughly aligned with the orbital plane of their planets after they absorb the orbital angular momentum of the HJ they ingest. Hot stars, on the other hand, generally retain their random distributions of obliquity, because their angular momentum is typically higher than the orbital angular momentum of the ingested planet. Citation: Titos Matsakos and Arieh Königl 2015, ApJ, 809, L20. doi: 10.1088/2041-8205/809/2/L20

  20. Limits on relief through constrained exchange on random graphs

    NASA Astrophysics Data System (ADS)

    LaViolette, Randall A.; Ellebracht, Lory A.; Gieseler, Charles J.

    2007-09-01

    Agents are represented by nodes on a random graph (e.g., “small world”). Each agent is endowed with a zero-mean random value that may be either positive or negative. All agents attempt to find relief, i.e., to reduce the magnitude of that initial value, to zero if possible, through exchanges. The exchange occurs only between the agents that are linked, a constraint that turns out to dominate the results. The exchange process continues until Pareto equilibrium is achieved. Only 40-90% of the agents achieved relief on small-world graphs with mean degree between 2 and 40. Even fewer agents achieved relief on scale-free-like graphs with a truncated power-law degree distribution. The rate at which relief grew with increasing degree was slow, only at most logarithmic for all of the graphs considered; viewed in reverse, the fraction of nodes that achieve relief is resilient to the removal of links.
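
    A toy version of this exchange process can be written in a few lines; the sketch below uses one simple rule (linked agents of opposite sign cancel as much value as possible) on a Watts-Strogatz graph, which may differ in detail from the paper's exchange mechanics.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(8)

# Agents on a small-world graph, each endowed with a zero-mean random value;
# linked pairs of opposite sign exchange to shrink both magnitudes.
G = nx.watts_strogatz_graph(n=200, k=4, p=0.1, seed=8)
value = {v: rng.normal() for v in G.nodes}

changed = True
while changed:                       # iterate to a Pareto-style fixed point
    changed = False
    for a, b in G.edges:
        if value[a] * value[b] < 0:  # opposite signs: both can be relieved
            x = min(abs(value[a]), abs(value[b]))
            value[a] -= np.sign(value[a]) * x
            value[b] -= np.sign(value[b]) * x
            changed = True

relieved = sum(1 for v in value.values() if abs(v) < 1e-12)
print(f"{relieved / G.number_of_nodes():.0%} of agents achieved full relief")
```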

  1. Stochastic analysis of a pulse-type prey-predator model

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Zhu, W. Q.

    2008-04-01

    A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.

  2. Stochastic analysis of a pulse-type prey-predator model.

    PubMed

    Wu, Y; Zhu, W Q

    2008-04-01

    A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.

  3. A Randomized Effectiveness Trial of a Systems-Level Approach to Stepped Care for War-Related PTSD

    DTIC Science & Technology

    2016-05-01

    [Fragmentary record; only disconnected excerpts of this report survive.] ... rather than storing the hard copies at their respective posts was approved. Also, an amendment changing the study Initiating PI from COL ... care is the de facto mental health system; in Collaborative Medicine Case Studies: Evidence in Practice, edited by Kessler R and Stafford D, New York ... During the 6.5 year study period, investigators developed the STEPS UP ...

  4. United States Air Force Summer Faculty Research Program (1987). Program Technical Report. Volume 2.

    DTIC Science & Technology

    1987-12-01

    [Fragmentary record; OCR debris removed.] ... the area of statistical inference, distribution theory and stochastic processes. I have taught courses in random processes and sample functions ... controlled phase separation of isotropic, binary mixtures, the theory of spinodal decomposition has been developed by Cahn and Hilliard [5,6]. This theory is ... peak and its initial rate of growth at a given temperature are predicted by the spinodal theory. The angle of maximum intensity is then determined by ...

  5. An in silico investigation into the causes of telomere length heterogeneity and its implications for the Hayflick limit.

    PubMed

    Golubev, A; Khrustalev, S; Butov, A

    2003-11-21

    In telomerase-negative cell populations the mean telomere length (TL) decreases with increasing population doubling number (PD). A critically small TL is believed to stop cell proliferation at a cell-, age- and species-specific PD, thus defining the Hayflick limit. However, positively skewed TL distributions are broad compared to the differences between initial and final mean TL, and strongly overlap at middle and late PD, which is inconsistent with a limiting role of TL. We used computer-assisted modelling to define what set of premises may account for the above. Our model incorporates the following concepts. DNA end replication problem: telomeres lose 1 shortening unit (SU) upon each cell division. Free radical-caused TL decrease: telomeres experience random events resulting in the loss of a random number of SU within the remaining TL. Stochasticity of gene expression and cell differentiation: cells experience random events inducing mitoses or committing cells to proliferation arrest, the latter option requiring a specified number of mitoses to be passed. Cells whose TL reaches 1 SU cannot divide. The proliferation kinetics of such virtual cells conforms to the transition probability model of the cell cycle. When no committing events occur, and at realistic SU estimates of the initial TL, maximal PD values far exceed the Hayflick limit observed in normal cells and are consistent with the crisis stage entered by transformed cells that have surpassed the Hayflick limit. At intermediate PD, symmetrical TL distributions are yielded. Upon the introduction of committing events making the ratio of the rates of proliferating and committing events (P/C) range from 1.10 to 1.25, TL distributions at intermediate PD become positively skewed, and virtual cell clones show bimodal size distributions. At P/C as high as 1.25, the majority of virtual cells at maximal PD contain telomeres with TL > 1 SU. A 10% increase in P/C within the 1.10-1.25 range produces a two-fold increase in the maximal PD, which can reach the values of up to 25 observed in rodent and some human cells. Increasing the number of committed mitoses from 0 to 10 can increase PD to about 50, as observed in human fibroblasts. Introduction of random TL breakage makes the shapes of the TL distributions quite dissimilar from those observed in real cells. Telomere length decrease is a correlate of cell proliferation that cannot alone account for the Hayflick limit, which primarily depends on the parameters of cell population kinetics. Free radical damage influences the Hayflick limit not through TL but rather by affecting the ratio of the rates of events that commit cells to mitoses or to proliferation arrest.
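
    The premises listed above lend themselves to a compact simulation; the sketch below is a toy version of the virtual-cell model in which telomeres lose 1 SU per division and random commitment events impose a fixed number of further mitoses. All parameter values are illustrative rather than fitted, and random TL breakage is omitted.

```python
import numpy as np

rng = np.random.default_rng(9)

def max_pd(initial_tl=60, p_over_c=1.15, committed_mitoses=5, n_cells=200):
    # Each cell is [telomere length in SU, remaining mitoses if committed, else -1].
    p_divide = p_over_c / (1.0 + p_over_c)      # P/(P+C) from the P/C ratio
    cells = [[initial_tl, -1] for _ in range(n_cells)]
    pd = 0
    while True:
        nxt = []
        for tl, left in cells:
            if tl <= 1 or left == 0:
                continue                         # proliferation arrest
            if left < 0 and rng.random() >= p_divide:
                left = committed_mitoses         # commitment event
                if left == 0:
                    continue
            child = left - 1 if left > 0 else -1
            nxt += [[tl - 1, child], [tl - 1, child]]   # division costs 1 SU
        if not nxt:
            return pd
        pd += 1
        if len(nxt) > n_cells:                   # resample to bound the population
            nxt = [nxt[i] for i in rng.choice(len(nxt), n_cells, replace=False)]
        cells = nxt

print("PD at proliferation arrest:", max_pd())
```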

  6. Cascading failures in complex networks with community structure

    NASA Astrophysics Data System (ADS)

    Lin, Guoqiang; di, Zengru; Fan, Ying

    2014-12-01

    Much empirical evidence shows that when attacked with cascading failures, scale-free or even random networks tend to collapse more extensively when the initially deleted node has higher betweenness. Meanwhile, in networks with strong community structure, high-betweenness nodes tend to be bridge nodes that link different communities, and the removal of such nodes will reduce only the connections among communities, leaving the networks fairly stable. Understanding what affects cascading failures and how to protect or attack networks with strong community structure is therefore of interest. In this paper, we have constructed scale-free Community Networks (SFCN) and Random Community Networks (RCN). We applied these networks, along with the Lancichinetti-Fortunato-Radicchi (LFR) benchmark, to the cascading-failure scenario to explore their vulnerability to attack and the relationship between cascading failures and the degree distribution and community structure of a network. The numerical results show that when the networks have a power-law degree distribution, a stronger community structure will result in the failure of fewer nodes. In addition, the initial removal of the node with the highest betweenness will not lead to the worst cascading, i.e. the largest avalanche size. The Betweenness Overflow (BOF), an index that we developed, is an effective indicator of this tendency. The RCN, however, display a different result. In addition, the avalanche size of each node can be adopted as an index to evaluate the importance of the node.
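
    For readers who want to experiment with this kind of attack, here is a rough sketch of a betweenness-seeded cascade using a Motter-Lai-style capacity rule (node capacity proportional to its initial betweenness load). The generator, the tolerance parameter alpha and the sizes are illustrative assumptions, not the paper's SFCN/RCN construction or its BOF index.

```python
import networkx as nx

def cascade(G, seed_node, alpha=0.2):
    """Remove seed_node, then iteratively remove overloaded nodes."""
    load0 = nx.betweenness_centrality(G)
    capacity = {v: (1 + alpha) * load0[v] for v in G}   # Motter-Lai rule
    H = G.copy()
    H.remove_node(seed_node)
    failed = {seed_node}
    while True:
        load = nx.betweenness_centrality(H)
        over = [v for v in H if load[v] > capacity[v]]
        if not over:
            return failed
        H.remove_nodes_from(over)
        failed.update(over)

# Clustered scale-free test graph (stand-in for a community network).
G = nx.powerlaw_cluster_graph(300, 3, 0.3, seed=1)
bc = nx.betweenness_centrality(G)
top = max(bc, key=bc.get)                    # highest-betweenness seed
print("avalanche size:", len(cascade(G, top)))
```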

  7. Monte Carlo computer simulations of Venus equilibrium and global resurfacing models

    NASA Technical Reports Server (NTRS)

    Dawson, D. D.; Strom, R. G.; Schaber, G. G.

    1992-01-01

    Two models have been proposed for the resurfacing history of Venus: (1) equilibrium resurfacing and (2) global resurfacing. The equilibrium model consists of two cases: in case 1, areas less than or equal to 0.03 percent of the planet are spatially randomly resurfaced at intervals of less than or equal to 150,000 yr to produce the observed spatially random distribution of impact craters and average surface age of about 500 m.y.; and in case 2, areas greater than or equal to 10 percent of the planet are resurfaced at intervals of greater than or equal to 50 m.y. The global resurfacing model proposes that the entire planet was resurfaced about 500 m.y. ago, destroying the preexisting crater population, followed by significantly reduced volcanism and tectonism. The present crater population has accumulated since then, with only 4 percent of the observed craters having been embayed by more recent lavas. To test the equilibrium resurfacing model we have run several Monte Carlo computer simulations for the two proposed cases. It is shown that the equilibrium resurfacing model is not a valid explanation of the observed crater population characteristics or Venus' resurfacing history. The global resurfacing model is the most likely explanation for the characteristics of Venus' cratering record. The amount of resurfacing since that event, some 500 m.y. ago, can be estimated by a different type of Monte Carlo simulation. To date, our initial simulation has only considered the easiest case to implement. In this case, the volcanic events are randomly distributed across the entire planet and, therefore, contrary to observation, the flooded craters are also randomly distributed across the planet.

  8. Scalar mixtures in porous media

    NASA Astrophysics Data System (ADS)

    Kree, Mihkel; Villermaux, Emmanuel

    2017-10-01

    Using a technique allowing for in situ measurements of concentration fields, the evolution of scalar mixtures flowing within a porous medium made of a three-dimensional random stack of solid spheres is addressed. Two distinct fluorescent dyes are injected from separate sources. Their evolution as they disperse and mix through the medium is directly observed and quantified, which is made possible by matching the refractive indices of the spheres and the flowing interstitial liquid. We decipher the nature of the interaction rule between the scalar sources, explaining the phenomenon that alters the concentration distribution of the overall mixture as it decays toward uniformity. Any residual correlation of the initially merged sources is progressively hidden, leading to an effectively fully random interaction rule between the two distinct subfields.

  9. Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Haisheng; Xu, Rui; Chen, Huaping

    2018-04-01

    To minimize makespan for scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed to tackle the investigated problem in this paper. Considering that the problem is a multi-dimensional discrete problem, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. In order to improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving the two initial individuals, and the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts: sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions but also converges faster.
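
    As a concrete illustration of the PBIL core (integer encoding, one probability vector per task, makespan objective), here is a stripped-down sketch. The adaptive learning rate, the processing-time-based probability calculation, the Max-Min/Min-Min seeding and the IGA sampling of the full IEDA are all omitted; sizes and rates are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_machines, pop_size = 50, 8, 30
proc = rng.uniform(5, 50, n_tasks)            # task processing times

# One categorical distribution per task over the machines (integer encoding).
prob = np.full((n_tasks, n_machines), 1.0 / n_machines)

def makespan(assign):
    loads = np.zeros(n_machines)
    np.add.at(loads, assign, proc)            # sum processing time per machine
    return loads.max()

best_span = np.inf
for it in range(300):
    pop = np.array([[rng.choice(n_machines, p=prob[i]) for i in range(n_tasks)]
                    for _ in range(pop_size)])
    spans = np.array([makespan(ind) for ind in pop])
    elite = pop[spans.argmin()]
    best_span = min(best_span, spans.min())
    lr = 0.1                                  # fixed here; the paper adapts it
    for i, m in enumerate(elite):             # shift model toward the elite
        prob[i] *= (1 - lr)
        prob[i, m] += lr

print("best makespan:", round(best_span, 2))
```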

  10. Neural Predictors of Initiating Alcohol Use During Adolescence.

    PubMed

    Squeglia, Lindsay M; Ball, Tali M; Jacobus, Joanna; Brumback, Ty; McKenna, Benjamin S; Nguyen-Louie, Tam T; Sorg, Scott F; Paulus, Martin P; Tapert, Susan F

    2017-02-01

    Underage drinking is widely recognized as a leading public health and social problem for adolescents in the United States. Being able to identify at-risk adolescents before they initiate heavy alcohol use could have important clinical and public health implications; however, few investigations have explored individual-level precursors of adolescent substance use. This prospective investigation used machine learning with demographic, neurocognitive, and neuroimaging data in substance-naive adolescents to identify predictors of alcohol use initiation by age 18. Participants (N=137) were healthy substance-naive adolescents (ages 12-14) who underwent neuropsychological testing and structural and functional magnetic resonance imaging (sMRI and fMRI), and then were followed annually. By age 18, 70 youths (51%) initiated moderate to heavy alcohol use, and 67 remained nonusers. Random forest classification models identified the most important predictors of alcohol use from a large set of demographic, neuropsychological, sMRI, and fMRI variables. Random forest models identified 34 predictors contributing to alcohol use by age 18, including several demographic and behavioral factors (being male, higher socioeconomic status, early dating, more externalizing behaviors, positive alcohol expectancies), worse executive functioning, and thinner cortices and less brain activation in diffusely distributed regions of the brain. Incorporating a mix of demographic, behavioral, neuropsychological, and neuroimaging data may be the best strategy for identifying youths at risk for initiating alcohol use during adolescence. The identified risk factors will be useful for alcohol prevention efforts and in research to address brain mechanisms that may contribute to early drinking.
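
    The analysis strategy (random forest classification over a large mixed feature set, with predictors ranked by importance) can be sketched as follows. The feature matrix, labels and settings below are synthetic stand-ins, unrelated to the study's actual measures.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(137, 40))      # 137 adolescents, 40 mixed features
w = np.zeros(40)
w[:5] = [1.2, -0.8, 0.7, 0.6, -0.5] # only a few features truly matter
y = (X @ w + rng.normal(scale=1.5, size=137)) > 0   # initiators vs. nonusers

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X, y)
print("out-of-bag accuracy:", round(clf.oob_score_, 2))
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("top predictors (feature indices):", top)
```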

  11. Star formation history: Modeling of visual binaries

    NASA Astrophysics Data System (ADS)

    Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.

    2018-05-01

    Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.

  12. A simple model of global cascades on random networks

    NASA Astrophysics Data System (ADS)

    Watts, Duncan J.

    2002-04-01

    The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades, herein called global cascades, that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.
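
    The threshold rule is simple to implement. The sketch below runs one realization on an Erdős-Rényi graph: a single random seed is activated and a node adopts once the adopting fraction of its neighbors reaches its threshold phi (taken uniform here for simplicity; the paper also treats heterogeneous thresholds and degree distributions). Parameter values are illustrative.

```python
import random
import networkx as nx

def global_cascade(n=10000, z=4.0, phi=0.18, seed=1):
    """Seed one random node; spread by the fractional threshold rule."""
    G = nx.gnp_random_graph(n, z / (n - 1), seed=seed)
    random.seed(seed)
    active = {random.randrange(n)}
    frontier = set(active)
    while frontier:
        candidates = set()
        for u in frontier:                    # only neighbors of new adopters
            candidates.update(G.neighbors(u))
        candidates -= active
        nxt = set()
        for v in candidates:
            k = G.degree(v)
            if k > 0 and sum(w in active for w in G[v]) / k >= phi:
                nxt.add(v)
        active |= nxt
        frontier = nxt
    return len(active) / n

print("final adopter fraction:", global_cascade())
```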

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu

    In the landscape perspective, our Universe begins with a quantum tunneling from an eternally-inflating parent vacuum, followed by a period of slow-roll inflation. We investigate the tunneling process and calculate the probability distribution for the initial conditions and for the number of e-folds of slow-roll inflation, modeling the landscape by a small-field one-dimensional random Gaussian potential. We find that such a landscape is fully consistent with observations, but the probability for future detection of spatial curvature is rather low, P ∼ 10⁻³.

  14. Threshold quantum state sharing based on entanglement swapping

    NASA Astrophysics Data System (ADS)

    Qin, Huawang; Tso, Raylin

    2018-06-01

    A threshold quantum state sharing scheme is proposed. The dealer uses quantum controlled-NOT operations to expand the d-dimensional quantum state and then uses entanglement swapping to distribute the state to a random subset of participants. The participants use single-particle measurements and unitary operations to recover the initial quantum state. In our scheme, the dealer can share different quantum states among different subsets of participants simultaneously, so the scheme is very flexible in practice.

  15. Distribution of breakage events in random packings of rodlike particles.

    PubMed

    Grof, Zdeněk; Štěpánek, František

    2013-07-01

    Uniaxial compaction and breakage of rodlike particle packings have been studied using a discrete element method simulation. A scaling relationship between the applied stress, the number of breakage events, and the number-mean particle length has been derived and compared with computational experiments. Based on results for a wide range of intrinsic particle strengths and initial particle lengths, it seems that a single universal relation can be used to describe the incidence of breakage events during compaction of rodlike particle layers.

  16. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
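
    The transformation idea can be illustrated with the standard Cholesky construction: independent standard normals map linearly to a correlated multivariate normal, and the inverse map decorrelates the vector again so that moments can be computed componentwise. The mean and covariance below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[4.0, 1.2, 0.6],
                  [1.2, 2.0, 0.3],
                  [0.6, 0.3, 1.0]])
L = np.linalg.cholesky(Sigma)          # Sigma = L @ L.T

Z = rng.standard_normal((100_000, 3))  # independent N(0, 1) components
X = mu + Z @ L.T                       # correlated N(mu, Sigma) samples
print(np.round(np.cov(X.T), 2))        # ~ Sigma

Y = (X - mu) @ np.linalg.inv(L).T      # back to ~independent components
print(np.round(np.cov(Y.T), 2))        # ~ identity matrix
```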

  17. Microfracture spacing distributions and the evolution of fracture patterns in sandstones

    NASA Astrophysics Data System (ADS)

    Hooker, J. N.; Laubach, S. E.; Marrett, R.

    2018-03-01

    Natural fracture patterns in sandstone were sampled using scanning electron microscope-based cathodoluminescence (SEM-CL) imaging. All fractures are opening-mode and are fully or partially sealed by quartz cement. Most sampled fractures are too small to be height-restricted by sedimentary layers. At very low strains (<∼0.001), fracture spatial distributions are indistinguishable from random, whereas at higher strains, fractures are generally statistically clustered. All 12 large (N > 100) datasets show spacings that are best fit by log-normal size distributions, compared to exponential, power law, or normal distributions. The clustering of fractures suggests that the locations of natural fractures are not determined by a random process. To investigate natural fracture localization, we reconstructed the opening history of a cluster of fractures within the Huizachal Group in northeastern Mexico, using fluid inclusions from synkinematic cements and thermal-history constraints. The largest fracture, which is the only one of the 101 present that is visible to the naked eye, opened relatively late in the sequence. This result suggests that the growth of sets of fractures is a self-organized process, in which small, initially isolated fractures grow and progressively interact, with preferential growth of a subset of fractures developing at the expense of growth of the rest. Size-dependent sealing of fractures within sets suggests that synkinematic cementation may contribute to fracture clustering.

  18. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  19. Fluorescence Excitation Spectroscopy for Phytoplankton Species Classification Using an All-Pairs Method: Characterization of a System with Unexpectedly Low Rank.

    PubMed

    Rekully, Cameron M; Faulkner, Stefan T; Lachenmyer, Eric M; Cunningham, Brady R; Shaw, Timothy J; Richardson, Tammi L; Myrick, Michael L

    2018-03-01

    An all-pairs method is used to analyze phytoplankton fluorescence excitation spectra. An initial set of nine phytoplankton species is analyzed in pairwise fashion to select two optical filter sets, and then the two filter sets are used to explore variations among a total of 31 species in a single-cell fluorescence imaging photometer. Results are presented in terms of pair analyses; we report that 411 of the 465 possible pairings of the larger group of 31 species can be distinguished using the initial nine-species-based selection of optical filters. A bootstrap analysis based on the larger data set shows that the distribution of possible pair separation results based on a randomly selected nine-species initial calibration set is strongly peaked in the 410-415 pair separation range, consistent with our experimental result. Further, the result for filter selection using all 31 species is also 411 pair separations. The set of phytoplankton fluorescence excitation spectra is intuitively high in rank due to the number and variety of pigments that contribute to the spectrum. However, the results in this report are consistent with an effective rank, as determined by a variety of heuristic and statistical methods, in the range of 2-3. These results are reviewed in consideration of how consistent the filter selections are from model to model for the data presented here. We discuss the common observation that rank is generally found to be relatively low even in many seemingly complex circumstances, so that it may be productive to assume a low rank from the beginning. If a low-rank hypothesis is valid, then relatively few samples are needed to explore an experimental space. Under very restricted circumstances for uniformly distributed samples, the minimum number for an initial analysis might be as low as 8-11 random samples for 1-3 factors.

  20. New techniques for modeling the reliability of reactor pressure vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.

    1985-12-01

    In recent years several probabilistic fracture mechanics codes, including the VISA code, have been developed to predict the reliability of reactor pressure vessels. This paper describes new modeling techniques used in a second generation of the VISA code entitled VISA-II. Results are presented that show the sensitivity of vessel reliability predictions to such factors as in-service inspection to detect flaws, random positioning of flaws within the vessel wall's thickness, and fluence distributions that vary throughout the vessel. The algorithms used to implement these modeling techniques are also described. Other new options in VISA-II are also described in this paper. The effect of vessel cladding has been included in the heat transfer, stress, and fracture mechanics solutions in VISA-II. The algorithm for simulating flaws has been changed to consider an entire vessel rather than a single flaw in a single weld. The flaw distribution was changed to include the distribution of both flaw depth and length. A menu of several alternate equations has been included to predict the shift in RTNDT. For flaws that arrest and later re-initiate, an option was also included to allow correlating the current arrest toughness with subsequent initiation toughnesses. 21 refs.

  1. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
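
    A minimal sketch of the transformation step alone (not the paper's Bayesian random effects model) is shown below: skewed effect estimates are Box-Cox transformed with a maximum-likelihood lambda, summarised, and the median is mapped back to the original scale. The standard Box-Cox transform assumes positive inputs, so the synthetic data here are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effects = rng.lognormal(mean=0.3, sigma=0.6, size=25)  # skewed study estimates

z, lam = stats.boxcox(effects)         # lambda chosen by maximum likelihood
med_z = np.median(z)

# Medians are preserved by monotone transforms, so invert Box-Cox at the median.
if lam != 0:
    overall_median = (lam * med_z + 1) ** (1 / lam)
else:
    overall_median = np.exp(med_z)
print(f"lambda = {lam:.2f}, overall median = {overall_median:.2f}")
```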

  2. Political opinion formation: Initial opinion distribution and individual heterogeneity of tolerance

    NASA Astrophysics Data System (ADS)

    Jin, Cheng; Li, Yifu; Jin, Xiaogang

    2017-02-01

    Opinion dynamics on networks have received serious attention for their profound prospects in social behaviours and self-organized systems. However, political opinion formation, as one typical and significant case, remains under-explored. Previous agent-based simulations propose various models based on different mechanisms, such as the coevolution between network topology and status transition. Nonetheless, even under the same network topology and with the same simple mechanism, the opinions that form can still be uncertain. In this work, we introduce two features into political opinion formation: the initial distribution of opinions and the individual heterogeneity of tolerance to opinion change. These two features are embedded in the network construction phase of a classical model. By comparing multiple simple-party systems, along with a detailed analysis of the two-party system, we capture the critical phenomena of fragmentation, polarization and consensus, both in the persistent stable stage and in process. We further introduce the average ratio of nearest neighbours to characterize the stage of opinion formation. The results show that the initial distribution of opinions leads to different evolution results on similar random networks. In addition, the existence of stubborn nodes plays a special role: only nodes that are extremely stubborn can change the final opinion distribution, while in other cases they only delay the time to reach stability. If stubborn nodes are small in number, their effects are confined within a small range. This theoretical work goes deeper into an existing model; it is an early exploration of the qualitative and quantitative simulation of party competition.

  3. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
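
    The approach can be sketched with inverse-CDF mapping of uniform variates: the same uniform stream can be pushed through different inverse CDFs to produce discrete load histories with each target law, whose peaks are then tallied. Distribution parameters and the simple peak count below are illustrative assumptions, not the report's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)          # the underlying random number stream

histories = {
    "weibull":     stats.weibull_min(c=1.5, scale=2.0).ppf(u),
    "lognormal":   stats.lognorm(s=0.5, scale=1.0).ppf(u),
    "exponential": stats.expon(scale=1.0).ppf(u),
}

def count_peaks(x):
    """Count strict local maxima of a discrete load history."""
    return int(np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])))

for name, h in histories.items():
    print(f"{name}: mean = {h.mean():.2f}, peaks = {count_peaks(h)}")
```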

  4. Randomly biased investments and the evolution of public goods on interdependent networks

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Wu, Te; Li, Zhiwu; Wang, Long

    2017-08-01

    Deciding how to allocate resources between interdependent systems is important for optimizing efficiency. We study the effects of heterogeneous contribution, induced by such interdependency, on the evolution of cooperation, by implementing public goods games on two-layer networks. The corresponding players on different layers share a fixed amount of resources as their initial investment. The symmetry breaking of investments between players located on different layers is able either to prevent investments from falling into deadlock or to extract them out of it. Results show that a moderate investment heterogeneity is most favorable for the evolution of cooperation, and that random allocation of investment bias suppresses cooperators over a wide range of the investment bias and the enhancement effect. Further studies on time evolution with different initial strategy configurations show that the non-interdependent cooperators along the interface of interdependent cooperators are also an indispensable factor in facilitating cooperative behavior. Our main results are qualitatively unchanged even when the investment bias is diversified according to a uniform distribution. Our study may shed light on the understanding of the origin of cooperative behavior on interdependent networks.

  5. Specifying initial stress for dynamic heterogeneous earthquake source models

    USGS Publications Warehouse

    Andrews, D.J.; Barall, M.

    2011-01-01

    Dynamic rupture calculations using heterogeneous stress drop that is random and self-similar with a power-law spatial spectrum have great promise of producing realistic ground-motion predictions. We present procedures to specify initial stress for random events with a target rupture length and target magnitude. The stress function is modified in the depth dimension to account for the brittle-ductile transition at the base of the seismogenic zone. Self-similar fluctuations in stress drop are tied in this work to the long-wavelength stress variation that determines rupture length. Heterogeneous stress is related to friction levels in order to relate the model to physical concepts. In a variant of the model, there are high-stress asperities with low background stress. This procedure has a number of advantages: (1) rupture stops naturally, not at artificial barriers; (2) the amplitude of short-wavelength fluctuations of stress drop is not arbitrary: the spectrum is fixed to the long-wavelength fluctuation that determines rupture length; and (3) large stress drop can be confined to asperities occupying a small fraction of the total rupture area, producing slip distributions with enhanced peaks.

  6. Simulating star clusters with the AMUSE software framework. I. Dependence of cluster lifetimes on model assumptions and cluster dissolution modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, Alfred J.; McMillan, Stephen L. W.; Vesperini, Enrico

    2013-12-01

    We perform a series of simulations of evolving star clusters using the Astrophysical Multipurpose Software Environment (AMUSE), a new community-based multi-physics simulation package, and compare our results to existing work. These simulations model a star cluster beginning with a King model distribution and a selection of power-law initial mass functions and contain a tidal cutoff. They are evolved using collisional stellar dynamics and include mass loss due to stellar evolution. After verifying that the differences between AMUSE results and results from previous studies are understood, we explored the variation in cluster lifetimes due to the random realization noise introduced by transforming a King model to specific initial conditions. This random realization noise can affect the lifetime of a simulated star cluster by up to 30%. Two modes of star cluster dissolution were identified: a mass evolution curve that contains a runaway cluster dissolution with a sudden loss of mass, and a dissolution mode that does not contain this feature. We refer to these dissolution modes as 'dynamical' and 'relaxation' dominated, respectively. For Salpeter-like initial mass functions, we determined the boundary between these two modes in terms of the dynamical and relaxation timescales.

  7. Criticality in finite dynamical networks

    NASA Astrophysics Data System (ADS)

    Rohlf, Thimo; Gulbahce, Natali; Teuscher, Christof

    2007-03-01

    It has been shown analytically and experimentally that both random Boolean and random threshold networks show a transition from ordered to chaotic dynamics at a critical average connectivity Kc in the thermodynamic limit [1]. By looking at the statistical distributions of damage spreading (damage sizes), we go beyond this extensively studied mean-field approximation. We study the scaling properties of damage size distributions as a function of system size N and initial perturbation size d(t=0). We present numerical evidence that another characteristic point, Kd, exists for finite system sizes, where the expectation value of damage spreading in the network is independent of the system size N. Further, the probability to obtain critical networks is investigated for a given system size and average connectivity k. Our results suggest that, for finite-size dynamical networks, phase space structure is very complex and may not exhibit a sharp order-disorder transition. Finally, we discuss the implications of our findings for evolutionary processes and learning applied to networks which solve specific computational tasks. [1] Derrida, B. and Pomeau, Y. (1986), Europhys. Lett., 1, 45-49

  8. Coverage maximization under resource constraints using a nonuniform proliferating random walk.

    PubMed

    Saha, Sudipta; Ganguly, Niloy

    2013-02-01

    Information management services on networks, such as search and dissemination, play a key role in any large-scale distributed system. One of the most desirable features of these services is the maximization of the coverage, i.e., the number of distinctly visited nodes under constraints of network resources as well as time. However, redundant visits of nodes by different message packets (modeled, e.g., as walkers) initiated by the underlying algorithms for these services cause wastage of network resources. In this work, using results from analytical studies done in the past on a K-random-walk-based algorithm, we identify that redundancy quickly increases with an increase in the density of the walkers. Based on this postulate, we design a very simple distributed algorithm which dynamically estimates the density of the walkers and thereby carefully proliferates walkers in sparse regions. We use extensive computer simulations to test our algorithm in various kinds of network topologies whereby we find it to be performing particularly well in networks that are highly clustered as well as sparse.

  9. Random deposition of particles of different sizes.

    PubMed

    Forgerini, F L; Figueiredo, W

    2009-04-01

    We study the surface growth generated by the random deposition of particles of different sizes. A model is proposed where the particles aggregate on an initially flat surface, giving rise to a rough interface and a porous bulk. By using Monte Carlo simulations, a surface is grown by adding particles of different sizes, as well as identical particles, on the substrate in (1+1) dimensions. In the case of deposition of particles of different sizes, the sizes are selected from a Poisson distribution and may vary by one order of magnitude. For the deposition of identical particles, only particles which are larger than one lattice parameter of the substrate are considered. We calculate the usual scaling exponents, namely the roughness, growth, and dynamic exponents alpha, beta, and z, respectively, as well as the porosity in the bulk, determining the porosity as a function of the particle size. The results of our simulations show that the roughness evolves in time following three different behaviors. The roughness at initial times behaves as in the random deposition model. At intermediate times, the surface roughness grows slowly, and finally, at long times, it enters the saturation regime. The bulk formed by depositing large particles reveals a porosity that increases very fast at initial times and also reaches a saturation value. Except for the case where particles have the size of one lattice spacing, we always find that the surface roughness and porosity reach limiting values at long times. Surprisingly, we find that the scaling exponents are the same as those predicted by the Villain-Lai-Das Sarma equation.
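
    A minimal sketch of one such (1+1)-dimensional deposition rule is given below: particle widths are drawn from a shifted Poisson distribution and each square particle settles on top of the highest column under its span. The settling rule, the sizes and the measured quantity (interface width only, with no porosity bookkeeping) are simplifying assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 512                      # substrate length (lattice units)
h = np.zeros(L)              # surface height profile
MEAN_SIZE = 3                # mean particle width

for t in range(100_000):
    w = 1 + rng.poisson(MEAN_SIZE - 1)    # particle width >= 1
    x = rng.integers(0, L)
    cols = (x + np.arange(w)) % L         # periodic boundary conditions
    h[cols] = h[cols].max() + w           # block rests on its highest column

print(f"mean height = {h.mean():.1f}, interface width W = {h.std():.2f}")
```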

  10. Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions

    DTIC Science & Technology

    2009-03-01

    Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions, by Georgios Tsivgoulis, March 2009. Engineer's Thesis. ...the non-line-of-sight information. Subject terms: Wireless Sensor Network, Direction of Arrival, DOA, Random

  11. Links between fear of humans, stress and survival support a non-random distribution of birds among urban and rural habitats

    PubMed Central

    Rebolo-Ifrán, Natalia; Carrete, Martina; Sanz-Aguilar, Ana; Rodríguez-Martínez, Sol; Cabezas, Sonia; Marchant, Tracy A.; Bortolotti, Gary R.; Tella, José L.

    2015-01-01

    Urban endocrine ecology aims to understand how organisms cope with new sources of stress and maintain allostatic load to thrive in an increasingly urbanized world. Recent research efforts have yielded controversial results based on short-term measures of stress, without exploring its fitness effects. We measured feather corticosterone (CORTf, reflecting the duration and amplitude of glucocorticoid secretion over several weeks) and subsequent annual survival in urban and rural burrowing owls. This species shows high individual consistency in fear of humans (i.e., flight initiation distance, FID), allowing us to hypothesize that individuals distribute among habitats according to their tolerance to human disturbance. FIDs were shorter in urban than in rural birds, but CORTf levels did not differ, nor were correlated to FIDs. Survival was twice as high in urban as in rural birds and links with CORTf varied between habitats: while a quadratic relationship supports stabilizing selection in urban birds, high predation rates may have masked CORTf-survival relationship in rural ones. These results evidence that urban life does not constitute an additional source of stress for urban individuals, as shown by their near identical CORTf values compared with rural conspecifics supporting the non-random distribution of individuals among habitats according to their behavioural phenotypes. PMID:26348294

  12. Links between fear of humans, stress and survival support a non-random distribution of birds among urban and rural habitats.

    PubMed

    Rebolo-Ifrán, Natalia; Carrete, Martina; Sanz-Aguilar, Ana; Rodríguez-Martínez, Sol; Cabezas, Sonia; Marchant, Tracy A; Bortolotti, Gary R; Tella, José L

    2015-09-08

    Urban endocrine ecology aims to understand how organisms cope with new sources of stress and maintain allostatic load to thrive in an increasingly urbanized world. Recent research efforts have yielded controversial results based on short-term measures of stress, without exploring its fitness effects. We measured feather corticosterone (CORTf, reflecting the duration and amplitude of glucocorticoid secretion over several weeks) and subsequent annual survival in urban and rural burrowing owls. This species shows high individual consistency in fear of humans (i.e., flight initiation distance, FID), allowing us to hypothesize that individuals distribute among habitats according to their tolerance to human disturbance. FIDs were shorter in urban than in rural birds, but CORTf levels did not differ, nor were correlated to FIDs. Survival was twice as high in urban as in rural birds and links with CORTf varied between habitats: while a quadratic relationship supports stabilizing selection in urban birds, high predation rates may have masked CORTf-survival relationship in rural ones. These results evidence that urban life does not constitute an additional source of stress for urban individuals, as shown by their near identical CORTf values compared with rural conspecifics supporting the non-random distribution of individuals among habitats according to their behavioural phenotypes.

  13. Ocean biogeochemistry modeled with emergent trait-based genomics

    NASA Astrophysics Data System (ADS)

    Coles, V. J.; Stukel, M. R.; Brooks, M. T.; Burd, A.; Crump, B. C.; Moran, M. A.; Paul, J. H.; Satinsky, B. M.; Yager, P. L.; Zielinski, B. L.; Hood, R. R.

    2017-12-01

    Marine ecosystem models have advanced to incorporate metabolic pathways discovered with genomic sequencing, but direct comparisons between models and “omics” data are lacking. We developed a model that directly simulates metagenomes and metatranscriptomes for comparison with observations. Model microbes were randomly assigned genes for specialized functions, and communities of 68 species were simulated in the Atlantic Ocean. Unfit organisms were replaced, and the model self-organized to develop community genomes and transcriptomes. Emergent communities from simulations that were initialized with different cohorts of randomly generated microbes all produced realistic vertical and horizontal ocean nutrient, genome, and transcriptome gradients. Thus, the library of gene functions available to the community, rather than the distribution of functions among specific organisms, drove community assembly and biogeochemical gradients in the model ocean.

  14. Finite-time scaling at the Anderson transition for vibrations in solids

    NASA Astrophysics Data System (ADS)

    Beltukov, Y. M.; Skipetrov, S. E.

    2017-11-01

    A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν = 1.57 ± 0.02, in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
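
    The matrix construction is straightforward to reproduce. The sketch below builds M = A Aᵀ from a sparse Gaussian random matrix and extracts vibrational frequencies omega = sqrt(eigenvalue); the size and sparsity are illustrative, and no finite-time scaling analysis is attempted.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
N, density = 1000, 0.01                  # illustrative size and sparsity

# Sparse random matrix with independent Gaussian nonzero entries.
A = sparse.random(N, N, density=density, random_state=0,
                  data_rvs=rng.standard_normal, format="csr")
M = (A @ A.T).toarray()                  # dynamical matrix, positive semi-definite

eigvals = np.linalg.eigvalsh(M)
omega = np.sqrt(np.clip(eigvals, 0.0, None))   # vibrational frequencies
print("lowest frequencies:", np.round(omega[:6], 4))
```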

  15. An interactive control algorithm used for equilateral triangle formation with robotic sensors.

    PubMed

    Li, Xiang; Chen, Hongcai

    2014-04-22

    This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used by three neighboring robotic sensors, which are distributed randomly, to self-organize into an equilateral triangle (E) formation. The algorithm is proposed based on triangular geometry and considers the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by the robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out to verify the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can successfully self-organize into E formations, regardless of their initial distribution, using the same TFAs.

  16. Disruption of an Aligned Dendritic Network by Bubbles During Re-Melting in a Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Grugel, Richard N.; Brush, Lucien N.; Anilkumar, Amrutur V.

    2012-01-01

    The quiescent microgravity environment can be quite dynamic. Thermocapillary flow about "large" static bubbles on the order of 1 mm in diameter was easily observed by following smaller tracer bubbles. The bubble-induced flow was seen to disrupt a large dendritic array, effectively distributing free branches about the solid-liquid interface. "Small" dynamic bubbles were observed to travel at high velocities through the mushy zone, with the implication of bringing, detaching, and redistributing dendrite arm fragments at the solid-liquid interface. Large and small bubbles effectively re-orient and re-distribute dendrite branches, arms, and fragments at the solid-liquid interface. Subsequent initiation of controlled directional solidification results in the growth of dendrites having random orientations, which significantly compromises the desired science.

  17. An Interactive Control Algorithm Used for Equilateral Triangle Formation with Robotic Sensors

    PubMed Central

    Li, Xiang; Chen, Hongcai

    2014-01-01

    This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used by three neighboring robotic sensors, which are distributed randomly, to self-organize into an equilateral triangle (E) formation. The algorithm is proposed based on triangular geometry and considers the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by the robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out to verify the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can successfully self-organize into E formations, regardless of their initial distribution, using the same TFAs. PMID:24759118

  18. Inequality and visibility of wealth in experimental social networks.

    PubMed

    Nishi, Akihiro; Shirado, Hirokazu; Rand, David G; Christakis, Nicholas A

    2015-10-15

    Humans prefer relatively equal distributions of resources, yet societies have varying degrees of economic inequality. To investigate some of the possible determinants and consequences of inequality, here we perform experiments involving a networked public goods game in which subjects interact and gain or lose wealth. Subjects (n = 1,462) were randomly assigned to have higher or lower initial endowments, and were embedded within social networks with three levels of economic inequality (Gini coefficient = 0.0, 0.2, and 0.4). In addition, we manipulated the visibility of the wealth of network neighbours. We show that wealth visibility facilitates the downstream consequences of initial inequality: in initially more unequal situations, wealth visibility leads to greater inequality than when wealth is invisible. This result reflects a heterogeneous response to visibility in richer versus poorer subjects. We also find that making wealth visible has adverse welfare consequences, yielding lower levels of overall cooperation, inter-connectedness, and wealth. High initial levels of economic inequality alone, however, have relatively few deleterious welfare effects.

  19. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
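
    The stability of the semicircle under dilution is easy to check numerically. The sketch below dilutes an independent-entry Gaussian symmetric matrix (a special case of the weakly dependent setting considered in the paper), rescales by sqrt(Np), and compares the empirical spectrum with the semicircle density; the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 2000, 0.05                        # keep each entry with probability p
X = rng.standard_normal((N, N))
mask = rng.random((N, N)) < p            # random dilution
H = np.triu(X * mask, 1)
H = H + H.T                              # symmetric diluted matrix
H /= np.sqrt(N * p)                      # normalise: entry variance ~ 1/N

eig = np.linalg.eigvalsh(H)
hist, edges = np.histogram(eig, bins=40, range=(-2.2, 2.2), density=True)
mids = 0.5 * (edges[1:] + edges[:-1])
semicircle = np.sqrt(np.clip(4 - mids**2, 0, None)) / (2 * np.pi)
print("max deviation from semicircle:", np.abs(hist - semicircle).max())
```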

  20. Evaluating the Use of Random Distribution Theory to Introduce Statistical Inference Concepts to Business Students

    ERIC Educational Resources Information Center

    Larwin, Karen H.; Larwin, David A.

    2011-01-01

    Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…

  1. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional distributions are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.

  2. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
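
    The recursive construction amounts to inverting conditional CDFs coordinate by coordinate. The sketch below works a two-dimensional example under an assumed joint law, X_1 ~ Exp(1) and X_2 | X_1 = x ~ Uniform(0, x), chosen only because both inverse CDFs are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
U1, U2 = rng.uniform(size=n), rng.uniform(size=n)

X1 = -np.log(1.0 - U1)   # f_1: inverse CDF of Exp(1) applied to U_1
X2 = U2 * X1             # f_2: inverse of conditional CDF F(x2 | x1) = x2/x1

# Quick check of an implied moment: E[X2] = E[X1] / 2 = 0.5.
print(round(X1.mean(), 3), round(X2.mean(), 3))
```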

  3. Fidelity decay in interacting two-level boson systems: Freezing and revivals

    NASA Astrophysics Data System (ADS)

    Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.

    2011-05-01

    We study the fidelity decay in the k-body embedded ensembles of random matrices for bosons distributed in two single-particle states, taking as the reference or unperturbed Hamiltonian the one-body terms and the diagonal part of the k-body embedded ensemble of random matrices, and as the perturbation the residual off-diagonal part of the interaction. We calculate the ensemble-averaged fidelity with respect to an initial random state within linear response theory to second order in the perturbation strength and demonstrate that it displays the freeze of the fidelity. During the freeze, the average fidelity exhibits periodic revivals at integer values of the Heisenberg time tH. By selecting specific k-body terms of the residual interaction, we find that the periodicity of the revivals during the freeze of fidelity is an integer fraction of tH, thus relating the period of the revivals to the range k of the perturbing interaction terms. Numerical calculations confirm the analytical results.

  4. Stochastic space interval as a link between quantum randomness and macroscopic randomness?

    NASA Astrophysics Data System (ADS)

    Haug, Espen Gaarder; Hoff, Harald

    2018-03-01

    For many stochastic phenomena, we observe statistical distributions that have fat tails and high peaks compared to the Gaussian distribution. In this paper, we will explain how observable statistical distributions in the macroscopic world could be related to the randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in Gaussian (or very close to Gaussian) distributed subatomic particle randomness, but they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three-and-a-half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat tails and high peaks relative to the Gaussian distribution in stocks and commodity prices and many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).

  5. Distributed Detection with Collisions in a Random, Single-Hop Wireless Sensor Network

    DTIC Science & Technology

    2013-05-26

    Distributed detection with collisions in a random, single-hop wireless sensor network. Gene T. Whipps (U.S. Army Research Laboratory, Adelphi, MD 20783, and The Ohio State University), Emre Ertin and Randolph L. Moses (The Ohio State University). We consider the problem of... Approved for public release; distribution is unlimited.

  6. Synaptic Impairment and Robustness of Excitatory Neuronal Networks with Different Topologies

    PubMed Central

    Mirzakhalili, Ehsan; Gourgou, Eleni; Booth, Victoria; Epureanu, Bogdan

    2017-01-01

    Synaptic deficiencies are a known hallmark of neurodegenerative diseases, but the diagnosis of impaired synapses on the cellular level is not an easy task. Nonetheless, changes in the system-level dynamics of neuronal networks with damaged synapses can be detected using techniques that do not require high spatial resolution. This paper investigates how the structure/topology of neuronal networks influences their dynamics when they suffer from synaptic loss. We study different neuronal network structures/topologies by specifying their degree distributions. The modes of the degree distribution can be used to construct networks that consist of rich clubs and resemble small world networks, as well. We define two dynamical metrics to compare the activity of networks with different structures: persistent activity (namely, the self-sustained activity of the network upon removal of the initial stimulus) and quality of activity (namely, percentage of neurons that participate in the persistent activity of the network). Our results show that synaptic loss affects the persistent activity of networks with bimodal degree distributions less than it affects random networks. The robustness of neuronal networks enhances when the distance between the modes of the degree distribution increases, suggesting that the rich clubs of networks with distinct modes keep the whole network active. In addition, a tradeoff is observed between the quality of activity and the persistent activity. For a range of distributions, both of these dynamical metrics are considerably high for networks with bimodal degree distribution compared to random networks. We also propose three different scenarios of synaptic impairment, which may correspond to different pathological or biological conditions. Regardless of the network structure/topology, results demonstrate that synaptic loss has more severe effects on the activity of the network when impairments are correlated with the activity of the neurons. PMID:28659765

  7. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
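
    The gamma case described above can be verified numerically in a few lines: mixing the Poisson mean with a gamma random effect reproduces the negative binomial, with the variance exceeding the mean by the factor (1 + scale). Parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
shape, scale = 2.0, 1.5                       # gamma random effect for the mean
lam = rng.gamma(shape, scale, size=200_000)   # per-subject Poisson rates
counts = rng.poisson(lam)                     # overdispersed counts

print("sample mean:", counts.mean(), "sample var:", counts.var())  # var > mean

# Matching negative binomial: n = shape, p = 1 / (1 + scale).
nb = stats.nbinom(n=shape, p=1.0 / (1.0 + scale))
print("NB mean:", nb.mean(), "NB var:", nb.var())
```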

  8. The Evolution of Grain Size Distribution in Explosive Rock Fragmentation - Sequential Fragmentation Theory Revisited

    NASA Astrophysics Data System (ADS)

    Scheu, B.; Fowler, A. C.

    2015-12-01

    Fragmentation is a ubiquitous phenomenon in many natural and engineering systems. It is the process by which an initially competent medium, solid or liquid, is broken up into a population of constituents. Examples occur in collisions and impacts of asteroids/meteorites, explosion-driven fragmentation of munitions on a battlefield as well as of magma in a volcanic conduit causing explosive volcanic eruptions, and break-up of liquid drops. Besides the mechanism of fragmentation, the resulting frequency-size distribution of the generated constituents is of central interest. Initially these distributions were fitted empirically using lognormal, Rosin-Rammler and Weibull distributions (e.g. Brown & Wohletz 1995). The sequential fragmentation theory (Brown 1989, Wohletz et al. 1989, Wohletz & Brown 1995) and the application of fractal theory to fragmentation products (Turcotte 1986, Perfect 1997, Perugini & Kueppers 2012) attempt to overcome this shortcoming by providing a more physical basis for the applied distribution. Both rely on an at least partially scale-invariant and thus self-similar random fragmentation process. Here we provide a stochastic model for the evolution of grain size distribution during the explosion process. Our model is based on laboratory experiments in which volcanic rock samples explode naturally when rapidly depressurized from initial pressures of several MPa to ambient conditions. The physics governing this fragmentation process has been successfully modelled and the observed fragmentation pattern could be numerically reproduced (Fowler et al. 2010). The fragmentation of these natural rocks leads to grain size distributions which vary depending on the experimental starting conditions. Our model provides a theoretical description of these different grain size distributions. It combines a sequential model of the type outlined by Turcotte (1986), generalized to cater for the explosive process appropriate here, in particular by including, in the description of the fracturing events in which the rock fragments, a recipe for the production of fines, as observed in the experiments. To our knowledge, this implementation of a deterministic fracturing process into a stochastic (sequential) model is unique; further, it provides the model with some forecasting power.

  9. Random deflections of a string on an elastic foundation.

    NASA Technical Reports Server (NTRS)

    Sanders, J. L., Jr.

    1972-01-01

    The paper is concerned with the problem of a taut string on a random elastic foundation subjected to random loads. The boundary value problem is transformed into an initial value problem by the method of invariant imbedding. Fokker-Planck equations for the random initial value problem are formulated and solved in some special cases. The analysis leads to a complete characterization of the random deflection function.

  10. Ultrashort pulse chirp measurement via transverse second-harmonic generation in strontium barium niobate crystal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trull, J.; Wang, B.; Parra, A.

    2015-06-01

    Pulse compression in a dispersive strontium barium niobate crystal with a random size and distribution of anti-parallel oriented nonlinear domains is observed via transverse second harmonic generation. The dependence of the transverse width of the second harmonic trace along the propagation direction allows for the determination of the initial chirp and duration of pulses in the femtosecond regime. This technique permits a real-time analysis of the pulse evolution and facilitates fast in-situ correction of pulse chirp acquired in the propagation through an optical system.

  11. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures the random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  12. ON THE TIDAL DISSIPATION OF OBLIQUITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, T. M.; Lin, D. N. C., E-mail: tami@lpl.arizona.edu, E-mail: lin@ucolick.org

    2013-05-20

    We investigate tidal dissipation of obliquity in hot Jupiters. Assuming an initial random orientation of obliquity and parameters relevant to the observed population, the obliquity of hot Jupiters does not evolve to purely aligned systems. In fact, the obliquity evolves to either prograde, retrograde, or 90° orbits where the torque due to tidal perturbations vanishes. This distribution is incompatible with observations which show that hot Jupiters around cool stars are generally aligned. This calls into question the viability of tidal dissipation as the mechanism for obliquity alignment of hot Jupiters around cool stars.

  13. Random walk and graph cut based active contour model for three-dimension interactive pituitary adenoma segmentation from MR images

    NASA Astrophysics Data System (ADS)

    Sun, Min; Chen, Xinjian; Zhang, Zhiqiang; Ma, Chiyuan

    2017-02-01

    Accurate volume measurements of pituitary adenoma are important to the diagnosis and treatment of this kind of sellar tumor. Pituitary adenomas have different pathological representations and various shapes. Particularly, in the case of infiltrating to surrounding soft tissues, they present similar intensities and indistinct boundaries in T1-weighted (T1W) magnetic resonance (MR) images, so the extraction of pituitary adenoma from MR images is still a challenging task. In this paper, we propose an interactive method to segment the pituitary adenoma from brain MR data by combining a graph cuts based active contour model (GCACM) and the random walk algorithm. By using the GCACM method, the segmentation task is formulated as an energy minimization problem by a hybrid active contour model (ACM), and the problem is then solved by the graph cuts method. The region-based term in the hybrid ACM considers the local image intensities as described by Gaussian distributions with different means and variances, expressed as a maximum a posteriori probability (MAP). Random walk is utilized as an initialization tool to provide an initial surface for the GCACM. The proposed method is evaluated on the three-dimensional (3-D) T1W MR data of 23 patients and compared with the standard graph cuts method, the random walk method, the hybrid ACM method, a GCACM method which considers global mean intensity in region forces, and a competitive region-growing based GrowCut method implemented in 3D Slicer. Based on the experimental results, the proposed method is superior to those methods.

  14. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
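
    The claim above is straightforward to probe numerically; a minimal sketch (sizes and parameters are illustrative) comparing how close the sum of positive skewed summands is to fitted Gaussian versus log-normal laws:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n_summands, n_samples = 50, 20_000         # illustrative sizes
        x = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_summands))
        s = x.sum(axis=1)                          # an additive process of positive summands

        # Compare fits via the Kolmogorov-Smirnov distance (smaller = closer).
        ks_norm = stats.kstest(s, 'norm', args=(s.mean(), s.std())).statistic
        shape, loc, scale = stats.lognorm.fit(s, floc=0)
        ks_lognorm = stats.kstest(s, 'lognorm', args=(shape, loc, scale)).statistic
        print(f"KS vs Gaussian: {ks_norm:.4f}, KS vs log-normal: {ks_lognorm:.4f}")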

  15. Disruption of Transcriptional Coactivator Sub1 Leads to Genome-Wide Re-distribution of Clustered Mutations Induced by APOBEC in Active Yeast Genes

    PubMed Central

    Dhar, Alok; Polev, Dmitrii E.; Masharsky, Alexey E.; Rogozin, Igor B.; Pavlov, Youri I.

    2015-01-01

    Mutations in genomes of species are frequently distributed non-randomly, resulting in mutation clusters, including recently discovered kataegis in tumors. DNA editing deaminases play a prominent role in the etiology of these mutations. To gain insight into the enigmatic mechanisms of localized hypermutagenesis that lead to cluster formation, we analyzed the mutational single nucleotide variation (SNV) data obtained by whole-genome sequencing of drug-resistant mutants induced in yeast diploids by AID/APOBEC deaminase and the base analog 6-HAP. Deaminase from sea lamprey, PmCDA1, induced robust clusters, while 6-HAP induced a few weak ones. We found that PmCDA1, AID, and APOBEC1 deaminases preferentially mutate the beginning of actively transcribed genes. Inactivation of transcription initiation factor Sub1 strongly reduced deaminase-induced can1 mutation frequency, but, surprisingly, did not decrease the total SNV load in genomes. However, the SNVs in the genomes of the sub1 clones were re-distributed, and the effect of mutation clustering in the regions of transcription initiation was even more pronounced. At the same time, the mutation density in the protein-coding regions was reduced, resulting in a decrease of phenotypically detected mutants. We propose that the induction of clustered mutations by deaminases involves: a) the exposure of ssDNA strands during transcription and loss of protection of ssDNA due to the depletion of ssDNA-binding proteins, such as Sub1, and b) attainment of conditions favorable for APOBEC action in a subpopulation of cells, leading to enzymatic deamination within the currently expressed genes. This model is applicable to both the initial and the later stages of oncogenic transformation and explains variations in the distribution of mutations and kataegis events in different tumor cells. PMID:25941824

  16. Effect of bow-type initial imperfection on reliability of minimum-weight, stiffened structural panels

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac

    1993-01-01

    Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.

  17. Replication of alpha-satellite DNA arrays in endogenous human centromeric regions and in human artificial chromosome

    PubMed Central

    Erliandri, Indri; Fu, Haiqing; Nakano, Megumi; Kim, Jung-Hyun; Miga, Karen H.; Liskovykh, Mikhail; Earnshaw, William C.; Masumoto, Hiroshi; Kouprina, Natalay; Aladjem, Mirit I.; Larionov, Vladimir

    2014-01-01

    In human chromosomes, centromeric regions comprise megabase-size arrays of 171 bp alpha-satellite DNA monomers. The large distances spanned by these arrays preclude their replication from external sites and imply that the repetitive monomers contain replication origins. However, replication within these arrays has not previously been profiled and the role of alpha-satellite DNA in initiation of DNA replication has not yet been demonstrated. Here, replication of alpha-satellite DNA in endogenous human centromeric regions and in a de novo formed Human Artificial Chromosome (HAC) was analyzed. We showed that alpha-satellite monomers could function as origins of DNA replication and that replication of alphoid arrays organized into centrochromatin occurred earlier than that of arrays organized into heterochromatin. The distribution of inter-origin distances within centromeric alphoid arrays was comparable to the distribution of inter-origin distances on randomly selected non-centromeric chromosomal regions. Depletion of CENP-B, a kinetochore protein that binds directly to a 17 bp CENP-B box motif common to alpha-satellite DNA, resulted in enrichment of alpha-satellite sequences for proteins of the ORC complex, suggesting that CENP-B may have a role in regulating the replication of centromeric regions. Mapping of replication initiation sites in the HAC revealed that replication preferentially initiated in transcriptionally active regions. PMID:25228468

  18. Probability distribution of the entanglement across a cut at an infinite-randomness fixed point

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Majumdar, Satya N.; Huse, David A.

    2017-03-01

    We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ~ L^(-ψ(k)), where k ≡ S/ln[L/L_0], the large deviation function ψ(k) is found explicitly, and L_0 is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model, which we calculate for large L via a mapping to Majorana fermions.

  19. A scaling law for random walks on networks

    PubMed Central

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-01-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics. PMID:25311870
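
    As a purely illustrative complement (not the paper's theory), the path distribution in question can be sampled directly on a toy network; the graph, start node, and absorbing target below are assumptions.

        import random
        from collections import Counter

        random.seed(4)
        adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: []}   # node 3 absorbs the walk

        def sample_path(start=0, target=3):
            node, path = start, [start]
            while node != target:
                node = random.choice(adj[node])   # uniform stepping probabilities
                path.append(node)
            return tuple(path)

        counts = Counter(sample_path() for _ in range(100_000))
        ranked = counts.most_common()
        print("distinct paths observed:", len(ranked))
        print("three most likely paths:", ranked[:3])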

  20. Partial transpose of random quantum states: Exact formulas and meanders

    NASA Astrophysics Data System (ADS)

    Fukuda, Motohisa; Śniady, Piotr

    2013-04-01

    We investigate the asymptotic behavior of the empirical eigenvalue distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how the dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show a large deviation bound for the latter.
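
    A compact numerical sketch of the object studied above (an illustration under assumed dimensions, not the paper's combinatorics): sample an induced random state from a Ginibre matrix, take the partial transpose by an index reshuffle, and inspect its spectrum.

        import numpy as np

        rng = np.random.default_rng(11)
        d = 40                                    # local dimension, illustrative
        G = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
        rho = G @ G.conj().T
        rho /= np.trace(rho).real                 # induced random mixed state, trace 1

        # Partial transpose on the second tensor factor via an index reshuffle.
        rho_pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d*d, d*d)
        eig = np.linalg.eigvalsh(rho_pt)          # Hermitian, so real eigenvalues
        print("smallest eigenvalue: %.2e (negative values witness entanglement)"
              % eig.min())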

  3. Turbulence hierarchy in a random fibre laser

    PubMed Central

    González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman

    2017-01-01

    Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064

  4. Origin and evolution of circular waves and spirals in Dictyostelium discoideum territories.

    PubMed

    Pálsson, E; Cox, E C

    1996-02-06

    Randomly distributed Dictyostelium discoideum cells form cooperative territories by signaling to each other with cAMP. Cells initiate the process by sending out pulsatile signals, which propagate as waves. With time, circular and spiral patterns form. We show that by adding spatial and temporal noise to the levels of an important regulator of external cAMP levels, the cAMP phosphodiesterase inhibitor, we can explain the natural progression of the system from randomly firing cells to circular waves whose symmetries break to form double- and single- or multi-armed spirals. When phosphodiesterase inhibitor is increased with time, mimicking experimental data, the wavelength of the spirals shortens, and a proportion of them evolve into pairs of connected spirals. We compare these results to recent experiments, finding that the temporal and spatial correspondence between experiment and model is very close.

  5. New techniques for modeling the reliability of reactor pressure vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.

    1986-01-01

    In recent years several probabilistic fracture mechanics codes, including the VISA code, have been developed to predict the reliability of reactor pressure vessels. This paper describes several new modeling techniques used in a second generation of the VISA code entitled VISA-II. Results are presented that show the sensitivity of vessel reliability predictions to such factors as inservice inspection to detect flaws, random positioning of flaws within the vessel wall thickness, and fluence distributions that vary throughout the vessel. The algorithms used to implement these modeling techniques are also described. Other new options in VISA-II are also described in this paper. The effect of vessel cladding has been included in the heat transfer, stress, and fracture mechanics solutions in VISA-II. The algorithm for simulating flaws has been changed to consider an entire vessel rather than a single flaw in a single weld. The flaw distribution was changed to include the distribution of both flaw depth and length. A menu of several alternate equations has been included to predict the shift in RT_NDT. For flaws that arrest and later re-initiate, an option was also included to allow correlating the current arrest toughness with subsequent initiation toughnesses.

  6. Scaling laws of passive-scalar diffusion in the interstellar medium

    NASA Astrophysics Data System (ADS)

    Colbrook, Matthew J.; Ma, Xiangcheng; Hopkins, Philip F.; Squire, Jonathan

    2017-05-01

    Passive-scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic turbulence. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disc). The evolution of the scalar distribution is not the same as obtained using simple, constant 'effective diffusivity' as in Smagorinsky models, because the scale dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that apply only to ensemble behaviours (assuming many different, random scalar injection sites): individual Lagrangian 'patches' remain coherent (poorly mixed) and simply advect for a large number of turbulent flow-crossing times.

  7. Explicit equilibria in a kinetic model of gambling

    NASA Astrophysics Data System (ADS)

    Bassetti, F.; Toscani, G.

    2010-06-01

    We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is up for gambling and randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum which is shared among the agents. Among others, the exponential distribution appears as the steady state in the case of a uniformly distributed random fraction, while a Gamma distribution appears for a Beta-distributed random fraction. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
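
    The uniform-fraction case lends itself to a quick Monte Carlo check; a minimal sketch (population size and step count are arbitrary) in which pairwise pure-gambling exchanges relax toward an exponential wealth distribution:

        import numpy as np

        rng = np.random.default_rng(5)
        n, steps = 10_000, 200_000
        w = np.ones(n)                       # everyone starts with unit wealth

        for _ in range(steps):
            i, j = rng.integers(n, size=2)   # pick a random pair of agents
            if i == j:
                continue
            total = w[i] + w[j]              # the entire pooled sum is up for gambling
            f = rng.random()                 # uniformly distributed random fraction
            w[i], w[j] = f * total, (1 - f) * total

        print("mean:", w.mean(), "fraction below mean:", (w < w.mean()).mean())
        # For an exponential law, about 1 - exp(-1) ≈ 0.63 of agents lie below the mean.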

  8. Narrow-band generation in random distributed feedback fiber laser.

    PubMed

    Sugavanam, Srikanth; Tarasov, Nikita; Shu, Xuewen; Churkin, Dmitry V

    2013-07-15

    Narrow-band emission with a line-width down to ~0.05 nm is achieved in a random distributed feedback fiber laser employing narrow-band fiber Bragg grating or fiber Fabry-Perot interferometer filters. The observed line-width is ~10 times narrower than that of other random distributed feedback fiber lasers demonstrated to date. The random DFB laser with the Fabry-Perot interferometer filter provides simultaneously multi-wavelength and narrow-band (within each line) generation, with the possibility of further wavelength tuning.

  9. Fractional Brownian motion and multivariate-t models for longitudinal biomedical data, with application to CD4 counts in HIV-positive patients.

    PubMed

    Stirrup, Oliver T; Babiker, Abdel G; Carpenter, James R; Copas, Andrew J

    2016-04-30

    Longitudinal data are widely analysed using linear mixed models, with 'random slopes' models particularly common. However, when modelling, for example, longitudinal pre-treatment CD4 cell counts in HIV-positive patients, the incorporation of non-stationary stochastic processes such as Brownian motion has been shown to lead to a more biologically plausible model and a substantial improvement in model fit. In this article, we propose two further extensions. Firstly, we propose the addition of a fractional Brownian motion component, and secondly, we generalise the model to follow a multivariate-t distribution. These extensions are biologically plausible, and each demonstrated substantially improved fit on application to example data from the Concerted Action on SeroConversion to AIDS and Death in Europe study. We also propose novel procedures for residual diagnostic plots that allow such models to be assessed. Cohorts of patients were simulated from the previously reported and newly developed models in order to evaluate differences in predictions made for the timing of treatment initiation under different clinical management strategies. A further simulation study was performed to demonstrate the substantial biases in parameter estimates of the mean slope of CD4 decline with time that can occur when random slopes models are applied in the presence of censoring because of treatment initiation, with the degree of bias found to depend strongly on the treatment initiation rule applied. Our findings indicate that researchers should consider more complex and flexible models for the analysis of longitudinal biomarker data, particularly when there are substantial missing data, and that the parameter estimates from random slopes models must be interpreted with caution. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  10. Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Chen; Sichitiu, Mihail L.

    Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of the contact time distribution for random walk models (RW) assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, thus arriving at two different conclusions. In this paper we conduct comprehensive research on this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases result in a power-law or exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.

  11. Hot Spots on Io: Initial Results From Galileo's Near Infrared Mapping Spectrometer

    NASA Technical Reports Server (NTRS)

    Lopes-Gautier, Rosaly; Davies, A. G.; Carlson, R.; Smythe, W.; Kamp, L.; Soderblom, L.; Leader, F. E.; Mehlman, R.

    1997-01-01

    The Near-Infrared Mapping Spectrometer (NIMS) on Galileo has monitored the volcanic activity on Io since June 28, 1996. This paper presents preliminary analysis of NIMS thermal data for the first four orbits of the Galileo mission. NIMS has detected 18 new hot spots and 12 others which were previously known to be active. The distribution of the hot spots on Io's surface may not be random, as hot spots surround the two bright, SO2-rich regions of Bosphorus Regio and Colchis Regio. Most hot spots seem to be persistently active from orbit to orbit, and 10 of those detected were active in 1979 during the Voyager encounters. We report the distribution of hot spot temperatures and find that they are consistent with silicate volcanism.

  12. Transforming graphene nanoribbons into nanotubes by use of point defects.

    PubMed

    Sgouros, A; Sigalas, M M; Papagelis, K; Kalosakas, G

    2014-03-26

    Using molecular dynamics simulations with semi-empirical potentials, we demonstrate a method to fabricate carbon nanotubes (CNTs) from graphene nanoribbons (GNRs), by periodically inserting appropriate structural defects into the GNR crystal structure. We have found that various defect types initiate the bending of GNRs and eventually lead to the formation of CNTs. All kinds of carbon nanotubes (armchair, zigzag, chiral) can be produced with this method. The structural characteristics of the resulting CNTs, and the dependence on the different type and distribution of the defects, were examined. The smallest (largest) CNT obtained had a diameter of ∼ 5 Å (∼ 39 Å). Proper manipulation of ribbon edges controls the chirality of the CNTs formed. Finally, the effect of randomly distributed defects on the ability of GNRs to transform into CNTs is considered.

  13. Successfully recruiting a multicultural population: the DASH-Sodium experience.

    PubMed

    Kennedy, Betty M; Conlin, Paul R; Ernst, Denise; Reams, Patrice; Charleston, Jeanne B; Appel, Lawrence J

    2005-01-01

    Recruiting practices employed by the four clinical centers participating in the Dietary Approaches to Stop Hypertension (DASH)-Sodium trial were examined to assess the most successful method of obtaining participants and to describe pertinent learning experiences gained as a result of the trial. The primary recruitment strategies employed by each center were mass mailing brochures (direct, coupon packs, or other) and mass media (advertisements in newspapers, radio, and television spots). Of 412 randomized participants, 265 (64%) were from mass distribution of brochures, 62 (15%) mass media, and 85 (21%) were prior study participants, referred by word-of-mouth, or reported coming from screening events and presentations. Although the most successful method of recruitment was mass mailing brochures, three times as many brochures were distributed to obtain similar success as in the initial DASH trial.

  14. IS THE SUICIDE RATE A RANDOM WALK?

    PubMed

    Yang, Bijou; Lester, David; Lyke, Jennifer; Olsen, Robert

    2015-06-01

    The yearly suicide rates for the period 1933-2010 and the daily suicide numbers for 1990 and 1991 were examined for whether the distribution of difference scores (from year to year and from day to day) fitted a normal distribution, a characteristic of stochastic processes that follow a random walk. If the suicide rate were a random walk, then any disturbance to the suicide rate would have a permanent effect and national suicide prevention efforts would likely fail. The distribution of difference scores from day to day (but not the difference scores from year to year) fitted a normal distribution and, therefore, were consistent with a random walk.
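
    A minimal sketch of the test described above, run on synthetic data since the study's series is not reproduced in this record; the series length and the choice of the Shapiro-Wilk test are assumptions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        series = np.cumsum(rng.normal(size=365)) + 100.0   # stand-in daily series
        diffs = np.diff(series)                            # day-to-day difference scores

        stat, pval = stats.shapiro(diffs)                  # normality test of the increments
        print(f"Shapiro-Wilk p = {pval:.3f}; p > 0.05 means the increments are "
              "consistent with the normality expected of a random walk")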

  15. Simulating Fragmentation and Fluid-Induced Fracture in Disordered Media Using Random Finite-Element Meshes

    DOE PAGES

    Bishop, Joseph E.; Martinez, Mario J.; Newell, Pania

    2016-11-08

    Fracture and fragmentation are extremely nonlinear multiscale processes in which microscale damage mechanisms emerge at the macroscale as new fracture surfaces. Numerous numerical methods have been developed for simulating fracture initiation, propagation, and coalescence. In this paper, we present a computational approach for modeling pervasive fracture in quasi-brittle materials based on random close-packed Voronoi tessellations. Each Voronoi cell is formulated as a polyhedral finite element containing an arbitrary number of vertices and faces. Fracture surfaces are allowed to nucleate only at the intercell faces. Cohesive softening tractions are applied to new fracture surfaces in order to model the energy dissipated during fracture growth. The randomly seeded Voronoi cells provide a regularized discrete random network for representing fracture surfaces. The potential crack paths within the random network are viewed as instances of realizable crack paths within the continuum material. Mesh convergence of fracture simulations is viewed in a weak, or distributional, sense. The explicit facet representation of fractures within this approach is advantageous for modeling contact on new fracture surfaces and fluid flow within the evolving fracture network. Finally, applications of interest include fracture and fragmentation in quasi-brittle materials and geomechanical applications such as hydraulic fracturing, engineered geothermal systems, compressed-air energy storage, and carbon sequestration.

  17. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  18. Effect of texture randomization on the slip and interfacial robustness in turbulent flows over superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Seo, Jongmin; Mani, Ali

    2018-04-01

    Superhydrophobic surfaces demonstrate promising potential for skin friction reduction in naval and hydrodynamic applications. Recent developments of superhydrophobic surfaces aiming for scalable applications use random distributions of roughness, such as spray coating and etched processes. However, most previous analyses of the interaction between flows and superhydrophobic surfaces studied periodic geometries that are economically feasible only in laboratory-scale experiments. In order to assess the drag reduction effectiveness as well as interfacial robustness of superhydrophobic surfaces with randomly distributed textures, we conduct direct numerical simulations of turbulent flows over randomly patterned interfaces considering a range of texture widths w+ ≈ 4-26 and solid fractions φs = 11%-25%. Slip and no-slip boundary conditions are implemented in a pattern, modeling the presence of gas-liquid interfaces and solid elements. Our results indicate that the slip of randomly distributed textures under turbulent flows is about 30% less than that of surfaces with aligned features of the same size. In the small texture size limit w+ ≈ 4, the slip length of the randomly distributed textures in turbulent flows is well described by a previously introduced Stokes flow solution for randomly distributed shear-free holes. By comparing DNS results for patterned slip and no-slip boundaries against the corresponding homogenized slip-length boundary conditions, we show that turbulent flows over randomly distributed posts can be represented by an isotropic slip length in the streamwise and spanwise directions. The average pressure fluctuation on a gas pocket is similar to that of aligned features with the same texture size and gas fraction, but the maximum interface deformation at the leading edge of the roughness element is about twice as large when the textures are randomly distributed. The presented analyses provide insights into the implications of texture randomness for the drag reduction performance and robustness of superhydrophobic surfaces.

  19. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  20. Discovering non-random segregation of sister chromatids: the naïve treatment of a premature discovery

    PubMed Central

    Lark, Karl G.

    2013-01-01

    The discovery of non-random chromosome segregation (Figure 1) is discussed from the perspective of what was known in 1965 and 1966. The distinction between daughter, parent, or grandparent strands of DNA was developed in a bacterial system and led to the discovery that multiple copies of DNA elements of bacteria are not distributed randomly with respect to the age of the template strand. Experiments with higher eukaryotic cells demonstrated that during mitosis Mendel's laws were violated, and the initial serendipitous choice of eukaryotic cell system led to the striking example of non-random segregation of parent and grandparent DNA template strands in primary cultures of cells derived from mouse embryos. Attempts to extrapolate these findings to established tissue culture lines demonstrated that the property could be lost. Experiments using plant root tips demonstrated that the phenomenon exists in plants and that it was, at some level, under genetic control. Despite publication in major journals and symposia (Lark et al., 1966, 1967; Lark, 1967, 1969a,b,c), the potential implications of these findings were ignored for several decades. Here we explore possible reasons for the prematurity (Stent, 1972) of this discovery. PMID:23378946

  1. Brain MR image segmentation based on an improved active contour model

    PubMed Central

    Meng, Xiangrui; Gu, Wenya; Zhang, Jianwei

    2017-01-01

    It is often a difficult task to accurately segment brain magnetic resonance (MR) images with intensity inhomogeneity and noise. This paper introduces a novel level set method for simultaneous brain MR image segmentation and intensity inhomogeneity correction. To reduce the effect of noise, novel anisotropic spatial information, which can preserve more details of edges and corners, is proposed by incorporating the inner relationships among the neighbor pixels. The proposed energy function then uses the multivariate Student's t-distribution to fit the distribution of the intensities of each tissue. Furthermore, the proposed model utilizes hidden Markov random fields to model the spatial correlation between neighboring pixels/voxels. The means of the multivariate Student's t-distribution can be adaptively estimated by multiplying a bias field to reduce the effect of intensity inhomogeneity. In the end, we reconstructed the energy function to be convex and calculated it by using the Split Bregman method, which allows our framework to use random initialization, thereby allowing fully automated applications. Our method can obtain the final result in less than 1 second for a 2D image of size 256 × 256 and in less than 300 seconds for a 3D image of size 256 × 256 × 171. The proposed method was compared to other state-of-the-art segmentation methods using both synthetic and clinical brain MR images and increased the accuracy of the results by more than 3%. PMID:28854235

  2. Nearest-Neighbor Distances and Aggregative Effects in Turbulence

    NASA Astrophysics Data System (ADS)

    Lanerolle, Lyon W. J.; Rothschild, B. J.; Yeung, P. K.

    2000-11-01

    The dispersive nature of turbulence, which causes fluid elements to move apart (on average), is well known. Here we study another facet of turbulent mixing relevant to marine population dynamics: how small organisms (approximated by fluid particles) are brought close to each other and allowed to interact. The crucial role played by the small scales in this process allows us to use direct numerical simulations of stationary isotropic turbulence, here with Taylor-scale Reynolds numbers (R_λ) from 38 to 91. We study the evolution of the Nearest-Neighbor Distances (NND) for collections of fluid particles initially located randomly in space satisfying Poisson-type distributions with mean values from 0.5 to 2.0 Kolmogorov length scales. Our results show that as particles begin to disperse on average, some also begin to aggregate in space. In particular, we find that (i) a significant proportion of particles are closer to each other than if their NNDs were randomly distributed, (ii) aggregative effects become stronger with R_λ, and (iii) although the mean value of NND grows monotonically with time in Kolmogorov variables, the growth rates are slower at higher R_λ. These results may assist in explaining the 'patchiness' in plankton distributions observed in biological oceanography. Further details are given in B. J. Rothschild et al., The Biophysical Interpretation of Spatial Effects of Small-scale Turbulent Flow in the Ocean (paper in prep.).
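
    A minimal sketch of the random baseline implied above (illustrative point count and box, edge effects ignored): nearest-neighbor distances of uniformly random points computed with a k-d tree, compared with the mean NND of an ideal 3-D Poisson process.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(7)
        pts = rng.random((5000, 3))             # uniform points in a unit box
        tree = cKDTree(pts)
        d, _ = tree.query(pts, k=2)             # k=1 is each point itself
        nnd = d[:, 1]                           # distance to the nearest neighbor

        lam = len(pts)                          # points per unit volume
        # Mean NND for a 3-D Poisson process: Gamma(4/3) * (4*pi*lam/3)**(-1/3)
        mean_theory = 0.55396 / lam ** (1 / 3)
        print(f"measured mean NND {nnd.mean():.4f}, Poisson theory {mean_theory:.4f}")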

  3. Effect of Genetic Variants, Especially CYP2C9 and VKORC1, on the Pharmacology of Warfarin

    PubMed Central

    Fung, Erik; Patsopoulos, Nikolaos A.; Belknap, Steven M.; O’Rourke, Daniel J.; Robb, John F.; Anderson, Jeffrey L.; Shworak, Nicholas W.; Moore, Jason H.

    2014-01-01

    The genes encoding the cytochrome P450 2C9 enzyme (CYP2C9) and vitamin K-epoxide reductase complex unit 1 (VKORC1) are major determinants of anticoagulant response to warfarin. Together with patient demographics and clinical information, they account for approximately one-half of the warfarin dose variance in individuals of European descent. Recent prospective and randomized controlled trial data support pharmacogenetic guidance with their use in warfarin dose initiation and titration. Benefits from pharmacogenetics-guided warfarin dosing have been reported to extend beyond the period of initial dosing, with supportive data indicating benefits to at least 3 months. The genetic effects of VKORC1 and CYP2C9 in African and Asian populations are concordant with those in individuals of European ancestry; however, frequency distribution of allelic variants can vary considerably between major populations. Future randomized controlled trials in multiethnic settings using population-specific dosing algorithms will allow us to further ascertain the generalizability and cost-effectiveness of pharmacogenetics-guided warfarin therapy. Additional genome-wide association studies may help us to improve and refine dosing algorithms and potentially identify novel biological pathways. PMID:23041981

  4. Trace element concentrations and distributions in the main body tissues and the net requirements for maintenance and growth of Dorper × Hu lambs.

    PubMed

    Zhang, H; Nie, H T; Wang, Q; Wang, Z Y; Zhang, Y L; Guo, R H; Wang, F

    2015-05-01

    A comparative slaughter trial was conducted to estimate the trace element concentrations and distributions in the main body tissues and the net requirements for maintenance and growth of Dorper × Hu crossbred lambs. Thirty-five lambs of each gender (19.2 ± 0.36 kg initial BW) were used. Seven lambs of each gender were randomly chosen and slaughtered at approximately 20 kg BW as the baseline group for measuring initial body composition. Another 7 lambs of each gender were also randomly chosen and offered a pelleted mixed diet for ad libitum intake and slaughtered at approximately 28 kg BW. The remaining 21 sheep of each gender were randomly divided into 3 groups with 7 sheep each and assigned to ad libitum or 40 or 70% of ad libitum intake of a pelleted mixed diet (42:58 concentrate:roughage, DM basis). The 3 groups of each gender were slaughtered when the sheep fed ad libitum reached approximately 35 kg BW. Empty body (head + feet, hide, viscera + blood, and carcass) trace element contents were determined after slaughter. The results showed that the trace elements were mainly distributed in viscera (blood included), except for Zn, which was mainly distributed in the muscle and bone tissues. The net requirements were calculated using the comparative slaughter technique. For males and females, the daily net trace element requirements for maintenance were 356.1 and 164.1 μg Fe, 4.3 and 3.4 μg Mn, 42.0 and 29.8 μg Cu, and 83.5 and 102.0 μg Zn per kilogram empty body weight (EBW), respectively. Net requirements for growth decreased from 65.67 to 57.27 mg Fe, 0.35 to 0.25 mg Mn, and 3.45 to 2.82 mg Cu and increased from 26.36 to 26.65 mg Zn per kilogram EBW gain (EBWG) for males. Net requirements for growth decreased from 30.66 to 22.14 mg Fe, 0.43 to 0.32 mg Mn, 2.86 to 2.18 mg Cu, and 27.71 to 25.83 mg Zn per kilogram EBWG for females from 20 to 35 kg BW. This study indicated that the net trace element requirements for Dorper × Hu crossbred lambs may be different from those of purebred or other genotypes, and more data are needed for sheep in general.

  5. Dynamical behavior of the random field on the pulsating and snaking solitons in cubic-quintic complex Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Bakhtiar, Nurizatul Syarfinas Ahmad; Abdullah, Farah Aini; Hasan, Yahya Abu

    2017-08-01

    In this paper, we consider the dynamical behaviour of a random field on the pulsating and snaking solitons in dissipative systems described by the one-dimensional cubic-quintic complex Ginzburg-Landau equation (cqCGLE). The dynamical behaviour of the random field was simulated by adding a random field to the initial pulse. We then solve the equation numerically by fixing the initial amplitude profile for the pulsating and snaking solitons, without losing any generality. In order to create the random field, we choose 0 ≤ ɛ ≤ 1.0. As a result, multiple soliton trains are formed when the random field is applied to a pulse-like initial profile for the parameters of the pulsating and snaking solitons. The results also show the effects of varying the random field on the transient energy peaks of pulsating and snaking solitons.

  6. Characterizing ISI and sub-threshold membrane potential distributions: Ensemble of IF neurons with random squared-noise intensity.

    PubMed

    Kumar, Sanjeev; Karmeshu

    2018-04-01

    A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing a superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of the generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed-form expressions of probability distributions in terms of the generalized K-distribution. Based on a record of spiking activity of thousands of neurons, the findings of the proposed model are validated. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.
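
    A minimal sketch of the superstatistical ensemble described above (all parameters illustrative): each neuron draws σ² from a gamma distribution, and for fixed σ² the ISIs of a perfect integrate-and-fire neuron with drift μ and threshold θ are inverse Gaussian first-passage times, so they can be sampled directly.

        import numpy as np

        rng = np.random.default_rng(8)
        mu, theta = 1.0, 1.0                    # drift and firing threshold (reset at 0)
        n_neurons, isi_per_neuron = 500, 200

        sigma2 = rng.gamma(shape=2.0, scale=0.1, size=n_neurons)  # random sigma^2
        isis = []
        for s2 in sigma2:
            # First passage of drifted Brownian motion to theta is inverse Gaussian
            # with mean theta/mu and shape theta^2/sigma^2.
            isis.append(rng.wald(theta / mu, theta**2 / s2, size=isi_per_neuron))
        isis = np.concatenate(isis)             # the mixed (superstatistical) ISI sample

        print(f"ensemble ISI mean {isis.mean():.3f}, CV {isis.std() / isis.mean():.3f}")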

  7. Private randomness expansion with untrusted devices

    NASA Astrophysics Data System (ADS)

    Colbeck, Roger; Kent, Adrian

    2011-03-01

    Randomness is an important resource for many applications, from gambling to secure communication. However, guaranteeing that the output from a candidate random source could not have been predicted by an outside party is a challenging task, and many supposedly random sources used today provide no such guarantee. Quantum solutions to this problem exist, for example a device which internally sends a photon through a beamsplitter and observes on which side it emerges, but, presently, such solutions require the user to trust the internal workings of the device. Here, we seek to go beyond this limitation by asking whether randomness can be generated using untrusted devices—even ones created by an adversarial agent—while providing a guarantee that no outside party (including the agent) can predict it. Since this is easily seen to be impossible unless the user has an initially private random string, the task we investigate here is private randomness expansion. We introduce a protocol for private randomness expansion with untrusted devices which is designed to take as input an initially private random string and produce as output a longer private random string. We point out that private randomness expansion protocols are generally vulnerable to attacks that can render the initial string partially insecure, even though that string is used only inside a secure laboratory; our protocol is designed to remove this previously unconsidered vulnerability by privacy amplification. We also discuss extensions of our protocol designed to generate an arbitrarily long random string from a finite initially private random string. The security of these protocols against the most general attacks is left as an open question.

  8. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
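
    The approximation at issue is easy to probe by simulation; a minimal sketch (module counts and exponential execution times are assumptions) comparing the expected maximum of two processors' finishing times with the maximum of their expectations:

        import numpy as np

        rng = np.random.default_rng(9)
        # Two processors, each finishing after a sum of 8 exponential module times.
        t1 = rng.exponential(1.0, size=(100_000, 8)).sum(axis=1)
        t2 = rng.exponential(1.0, size=(100_000, 8)).sum(axis=1)

        exp_max = np.maximum(t1, t2).mean()     # E[max(T1, T2)]
        max_exp = max(t1.mean(), t2.mean())     # max(E[T1], E[T2])
        print(f"E[max] = {exp_max:.3f} vs max E = {max_exp:.3f}")  # E[max] >= max E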

  9. Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca

    2012-07-15

    Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
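
    A minimal sketch of the two-stage hit statistics described above (parameters are illustrative): a Poisson number of particles enters the target and each produces a Poisson number of hits, giving a non-Poisson compound total; survival in an m-hit model is the probability of accumulating fewer than m hits.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(10)
        lam, nu, m, n = 3.0, 1.5, 2, 1_000_000  # particles/target, hits/particle, hit threshold, samples

        particles = rng.poisson(lam, size=n)
        hits = rng.poisson(particles * nu)      # sum of Poisson(nu) over each particle, sampled exactly
        survival = (hits < m).mean()            # survive if fewer than m hits accumulate

        # A plain Poisson model with the same mean hit number lam*nu for comparison:
        print(f"compound model: {survival:.4f}  plain Poisson: "
              f"{stats.poisson.cdf(m - 1, lam * nu):.4f}")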

  10. Beyond the excised ensemble: modelling elliptic curve L-functions with random matrices

    NASA Astrophysics Data System (ADS)

    Cooper, I. A.; Morris, Patrick W.; Snaith, N. C.

    2016-02-01

    The ‘excised ensemble’, a random matrix model for the zeros of quadratic twist families of elliptic curve L-functions, was introduced by Dueñez et al (2012 J. Phys. A: Math. Theor. 45 115207). The excised model is motivated by a formula for central values of these L-functions in a paper by Kohnen and Zagier (1981 Invent. Math. 64 175-98). This formula indicates that for a finite set of L-functions from a family of quadratic twists, the central values are all either zero or greater than some positive cutoff. The excised model imposes this same condition on the central values of characteristic polynomials of matrices from SO(2N). Strangely, the cutoff on the characteristic polynomials that results in a convincing model for the L-function zeros is significantly smaller than that which we would obtain by naively transferring Kohnen and Zagier’s cutoff to the SO(2N) ensemble. In this current paper we investigate a modification to the excised model. It lacks the simplicity of the original excised ensemble, but it serves to explain the reason for the unexpectedly low cutoff in the original excised model. Additionally, the distribution of central L-values is ‘choppier’ than the distribution of characteristic polynomials, in the sense that it is a superposition of a series of peaks: the characteristic polynomial distribution is a smooth approximation to this. The excised model did not attempt to incorporate these successive peaks, only the initial cutoff. Here we experiment with including some of the structure of the L-value distribution. The conclusion is that a critical feature of a good model is to associate the correct mass with the first peak of the L-value distribution.

  11. Robustness of power systems under a democratic-fiber-bundle-like model

    NASA Astrophysics Data System (ADS)

    Yaǧan, Osman

    2015-06-01

    We consider a power system with N transmission lines whose initial loads (i.e., power flows) L_1, ..., L_N are independent and identically distributed with P_L(x) = P[L ≤ x]. The capacity C_i defines the maximum flow allowed on line i and is assumed to be given by C_i = (1 + α)L_i, with α > 0. We study the robustness of this power system against random attacks (or failures) that target a p fraction of the lines, under a democratic fiber-bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines. Our contributions are as follows. (i) We show analytically that the final breakdown of the system always takes place through a first-order transition at the critical attack size p* = 1 − E[L]/max_x(P[L > x](αx + E[L | L > x])), where E[·] is the expectation operator; (ii) we derive conditions on the distribution P_L(x) for which the first-order breakdown of the system occurs abruptly without any preceding diverging rate of failure; (iii) we provide a detailed analysis of the robustness of the system under three specific load distributions—uniform, Pareto, and Weibull—showing that with the minimum load L_min and mean load E[L] fixed, the Pareto distribution is the worst (in terms of robustness) among the three, whereas the Weibull distribution is the best with shape parameter selected relatively large; (iv) we provide numerical results that confirm our mean-field analysis; and (v) we show that p* is maximized when the load distribution is a Dirac delta function centered at E[L], i.e., when all lines carry the same load. This last finding is particularly surprising given that heterogeneity is known to lead to high robustness against random failures in many other systems.
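
    The equal-redistribution cascade described above is straightforward to simulate. The following Python sketch runs the cascade to completion for a given attack fraction p; the Weibull load distribution, the value of α and the attack sizes are illustrative assumptions, not values from the paper.

      import numpy as np

      def surviving_fraction(loads, alpha, p, rng):
          # Democratic fiber-bundle-like cascade: an initial attack removes a
          # p fraction of the lines; the total load of all failed lines is then
          # shared equally among the survivors until no capacity is exceeded.
          n = len(loads)
          capacity = (1.0 + alpha) * loads              # C_i = (1 + alpha) L_i
          alive = np.ones(n, dtype=bool)
          attacked = rng.choice(n, size=int(p * n), replace=False)
          alive[attacked] = False
          extra = loads[attacked].sum()                 # load to redistribute
          while True:
              survivors = np.flatnonzero(alive)
              if survivors.size == 0:
                  return 0.0
              carried = loads[survivors] + extra / survivors.size
              failed = survivors[carried > capacity[survivors]]
              if failed.size == 0:
                  return survivors.size / n
              alive[failed] = False
              extra += loads[failed].sum()

      rng = np.random.default_rng(0)
      loads = 0.5 + rng.weibull(2.0, size=100_000)      # assumed load distribution
      for p in (0.1, 0.3, 0.5):
          print(p, surviving_fraction(loads, alpha=0.5, p=p, rng=rng))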

  12. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

    2017-12-01

    Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the 'true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.

  13. Evaluation of strength and failure of brittle rock containing initial cracks under lithospheric conditions

    NASA Astrophysics Data System (ADS)

    Li, Xiaozhao; Qi, Chengzhi; Shao, Zhushan; Ma, Chao

    2018-02-01

    Natural brittle rock contains numerous randomly distributed microcracks. Crack initiation, growth, and coalescence play a predominant role in evaluating the strength and failure of brittle rocks. A new analytical method is proposed to predict the strength and failure of brittle rocks containing initial microcracks. The formulation of this method is based on an improved wing crack model and a suggested micro-macro relation. In this improved wing crack model, the crack angle is introduced explicitly as a variable, and an analytical stress-crack relation accounting for the crack angle effect is obtained. Coupling the proposed stress-crack relation with the suggested micro-macro relation, which links crack growth to axial strain, yields a stress-strain constitutive relation for predicting rock strength and failure. Considering different initial microcrack sizes, friction coefficients and confining pressures, the effects of crack angle on the tensile wedge force acting on the initial crack interface are studied, and the effects of crack angle on the stress-strain constitutive relation of rocks are also analyzed. The strength and crack initiation stress under different crack angles are discussed, and the most unfavorable crack angle, the one that first triggers crack initiation and rock failure, is found. The analytical results agree with previously published results, which verifies the rationality of the proposed analytical method.

  14. Spatial distribution of neurons innervated by chandelier cells.

    PubMed

    Blazquez-Llorca, Lidia; Woodruff, Alan; Inan, Melis; Anderson, Stewart A; Yuste, Rafael; DeFelipe, Javier; Merchan-Perez, Angel

    2015-09-01

    Chandelier (or axo-axonic) cells are a distinct group of GABAergic interneurons that innervate the axon initial segments of pyramidal cells and are thus thought to have an important role in controlling the activity of cortical circuits. To examine the circuit connectivity of chandelier cells (ChCs), we made use of a genetic targeting strategy to label neocortical ChCs in upper layers of juvenile mouse neocortex. We filled individual ChCs with biocytin in living brain slices and reconstructed their axonal arbors from serial semi-thin sections. We also reconstructed the cell somata of pyramidal neurons that were located inside the ChC axonal trees and determined the percentage of pyramidal neurons whose axon initial segments were innervated by ChC terminals. We found that the total percentage of pyramidal neurons that were innervated by a single labeled ChC was 18-22 %. Sholl analysis showed that this percentage peaked at 22-35 % for distances between 30 and 60 µm from the ChC soma, decreasing to lower percentages with increasing distances. We also studied the three-dimensional spatial distribution of the innervated neurons inside the ChC axonal arbor using spatial statistical analysis tools. We found that innervated pyramidal neurons are not distributed at random, but show a clustered distribution, with pockets where almost all cells are innervated and other regions within the ChC axonal tree that receive little or no innervation. Thus, individual ChCs may exert a strong, widespread influence on their local pyramidal neighbors in a spatially heterogeneous fashion.

  15. Calcium handling precedes cardiac differentiation to initiate the first heartbeat

    PubMed Central

    Tyser, Richard CV; Miranda, Antonio MA; Chen, Chiann-mun; Davidson, Sean M

    2016-01-01

    The mammalian heartbeat is thought to begin just prior to the linear heart tube stage of development. How the initial contractions are established and the downstream consequences of the earliest contractile function on cardiac differentiation and morphogenesis have not been described. Using high-resolution live imaging of mouse embryos, we observed randomly distributed spontaneous asynchronous Ca2+-oscillations (SACOs) in the forming cardiac crescent (stage E7.75) prior to overt beating. Nascent contraction initiated at around E8.0 and was associated with sarcomeric assembly and rapid Ca2+ transients, underpinned by sequential expression of the Na+-Ca2+ exchanger (NCX1) and L-type Ca2+ channel (LTCC). Pharmacological inhibition of NCX1 and LTCC revealed rapid development of Ca2+ handling in the early heart and an essential early role for NCX1 in establishing SACOs through to the initiation of beating. NCX1 blockade impacted on CaMKII signalling to down-regulate cardiac gene expression, leading to impaired differentiation and failed crescent maturation. DOI: http://dx.doi.org/10.7554/eLife.17113.001 PMID:27725084

  16. Critical thresholds for eventual extinction in randomly disturbed population growth models.

    PubMed

    Peckham, Scott D; Waymire, Edward C; De Leenheer, Patrick

    2018-02-16

    This paper considers several single species growth models featuring a carrying capacity, which are subject to random disturbances that lead to instantaneous population reduction at the disturbance times. This is motivated in part by growing concerns about the impacts of climate change. Our main goal is to understand whether or not the species can persist in the long run. We consider the discrete-time stochastic process obtained by sampling the system immediately after the disturbances, and find various thresholds for several modes of convergence of this discrete process, including thresholds for the absence or existence of a positively supported invariant distribution. These thresholds are given explicitly in terms of the intensity and frequency of the disturbances on the one hand, and the population's growth characteristics on the other. We also perform a similar threshold analysis for the original continuous-time stochastic process, and obtain a formula that allows us to express the invariant distribution for this continuous-time process in terms of the invariant distribution of the discrete-time process, and vice versa. Examples illustrate that these distributions can differ, and this sends a cautionary message to practitioners who wish to parameterize these and related models using field data. Our analysis relies heavily on a particular feature shared by all the deterministic growth models considered here, namely that their solutions exhibit an exponentially weighted averaging property between a function of the initial condition, and the same function applied to the carrying capacity. This property is due to the fact that these systems can be transformed into affine systems.
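
    The discrete-time process obtained by sampling immediately after the disturbances can be generated directly; the sketch below uses logistic growth, Poisson-timed disturbances and a fixed reduction factor, with all parameter values assumed for illustration.

      import numpy as np

      def logistic(x0, t, r=1.0, K=1.0):
          # Closed-form logistic solution X(t) with X(0) = x0.
          return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

      def post_disturbance_samples(x0=0.5, kappa=0.5, lam=2.0, n=10_000, seed=0):
          # Disturbances arrive as a Poisson process with rate lam; each one
          # instantaneously multiplies the population by kappa (0 < kappa < 1).
          # Returns the population sampled immediately after each disturbance.
          rng = np.random.default_rng(seed)
          x, out = x0, np.empty(n)
          for i in range(n):
              tau = rng.exponential(1.0 / lam)   # waiting time to next disturbance
              x = kappa * logistic(x, tau)       # grow, then instantaneous drop
              out[i] = x
          return out

      samples = post_disturbance_samples()
      # Crude persistence check: does the chain settle away from zero?
      print("tail mean:", samples[-1000:].mean(), "tail min:", samples[-1000:].min())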

  17. Ocean biogeochemistry modeled with emergent trait-based genomics.

    PubMed

    Coles, V J; Stukel, M R; Brooks, M T; Burd, A; Crump, B C; Moran, M A; Paul, J H; Satinsky, B M; Yager, P L; Zielinski, B L; Hood, R R

    2017-12-01

    Marine ecosystem models have advanced to incorporate metabolic pathways discovered with genomic sequencing, but direct comparisons between models and "omics" data are lacking. We developed a model that directly simulates metagenomes and metatranscriptomes for comparison with observations. Model microbes were randomly assigned genes for specialized functions, and communities of 68 species were simulated in the Atlantic Ocean. Unfit organisms were replaced, and the model self-organized to develop community genomes and transcriptomes. Emergent communities from simulations that were initialized with different cohorts of randomly generated microbes all produced realistic vertical and horizontal ocean nutrient, genome, and transcriptome gradients. Thus, the library of gene functions available to the community, rather than the distribution of functions among specific organisms, drove community assembly and biogeochemical gradients in the model ocean. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  18. Fluorescence correlation spectroscopy: the case of subdiffusion.

    PubMed

    Lubelski, Ariel; Klafter, Joseph

    2009-03-18

    The theory of fluorescence correlation spectroscopy is revisited here for the case of subdiffusing molecules. Subdiffusion is assumed to stem from a continuous-time random walk process with a fat-tailed distribution of waiting times and can therefore be formulated in terms of a fractional diffusion equation (FDE). The FDE plays the central role in developing the fluorescence correlation spectroscopy expressions, analogous to the role played by the simple diffusion equation for regular systems. Due to the nonstationary nature of the continuous-time random walk/FDE, some interesting properties emerge that are amenable to experimental verification and may help in discriminating among subdiffusion mechanisms. In particular, the current approach predicts 1), a strong dependence of correlation functions on the initial time (aging); 2), sensitivity of correlation functions to the averaging procedure, ensemble versus time averaging (ergodicity breaking); and 3), that the basic mean-squared displacement observable depends on how the mean is taken.
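
    The continuous-time random walk underlying the FDE is easy to simulate; the sketch below (jump and waiting-time parameters are illustrative assumptions) draws Pareto-tailed waiting times and estimates the ensemble mean-squared displacement, which grows sublinearly in time.

      import numpy as np

      def ctrw_position(t_eval, alpha=0.7, rng=None):
          # One CTRW trajectory: waiting-time pdf with tail ~ t**-(1 + alpha),
          # 0 < alpha < 1 (infinite mean), unit jumps; returns x(t_eval).
          rng = rng if rng is not None else np.random.default_rng()
          t, x = 0.0, 0.0
          while True:
              t += 1.0 + rng.pareto(alpha)     # fat-tailed waiting time, minimum 1
              if t > t_eval:
                  return x
              x += rng.choice((-1.0, 1.0))     # unbiased unit jump

      rng = np.random.default_rng(1)
      for t_eval in (1e2, 1e3, 1e4):
          msd = np.mean([ctrw_position(t_eval, rng=rng) ** 2 for _ in range(2000)])
          print(f"t = {t_eval:.0e}:  ensemble MSD = {msd:.1f}")
      # The MSD grows roughly as t**alpha rather than linearly in t.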

  19. Cascading failures mechanism based on betweenness-degree ratio distribution with different connecting preferences

    NASA Astrophysics Data System (ADS)

    Wang, Xiao Juan; Guo, Shi Ze; Jin, Lei; Chen, Mo

    We study the structural robustness of scale-free networks against cascading failures induced by overload. In this paper, a failure mechanism based on the betweenness-degree ratio distribution is proposed. In the cascading failure model we build, the initial load of an edge is proportional to the betweenness of the nodes at its ends. During random edge deletion, we find a phase transition. Based on this phase transition, we divide the process of the cascading failure into two parts, the robust area and the vulnerable area, and define corresponding indicators to measure the performance of the networks in both areas. From our derivation, we find that the vulnerability of the network is determined by the distribution of the betweenness-degree ratio. We then use the connection between the node ability coefficient and the distribution of the betweenness-degree ratio to explain the cascading failure mechanism. In simulations, we verify the correctness of our derivations. By changing connecting preferences, we find that scale-free networks with slight assortativity perform better in both the robust and the vulnerable areas.
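
    One plausible reading of the betweenness-degree ratio, computed with networkx on an illustrative Barabási-Albert graph (the graph parameters are assumptions, not taken from the paper):

      import networkx as nx
      import numpy as np

      # Illustrative scale-free network; the parameters are arbitrary choices.
      G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

      betweenness = nx.betweenness_centrality(G)
      ratios = np.array([betweenness[v] / G.degree(v) for v in G.nodes()])

      # One reading of the load rule: the initial load of an edge is the sum
      # of the betweenness of its two end nodes.
      edge_load = {(u, w): betweenness[u] + betweenness[w] for u, w in G.edges()}

      print("mean betweenness-degree ratio:", ratios.mean())
      print("95th percentile:", np.quantile(ratios, 0.95))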

  20. Analysis of regional deformation and strain accumulation data adjacent to the San Andreas fault

    NASA Technical Reports Server (NTRS)

    Turcotte, Donald L.

    1991-01-01

    A new approach to the understanding of crustal deformation was developed under this grant. This approach combined aspects of fractals, chaos, and self-organized criticality to provide a comprehensive theory for deformation on distributed faults. It is hypothesized that crustal deformation is an example of comminution: Deformation takes place on a fractal distribution of faults resulting in a fractal distribution of seismicity. Our primary effort under this grant was devoted to developing an understanding of distributed deformation in the continental crust. An initial effort was carried out on the fractal clustering of earthquakes in time. It was shown that earthquakes do not obey random Poisson statistics, but can be approximated in many cases by coupled, scale-invariant fractal statistics. We applied our approach to the statistics of earthquakes in the New Hebrides region of the southwest Pacific because of the very high level of seismicity there. This work was written up and published in the Bulletin of the Seismological Society of America. This approach was also applied to the statistics of the seismicity on the San Andreas fault system.

  1. Optimizing the robustness of electrical power systems against cascading failures.

    PubMed

    Zhang, Yingrui; Yağan, Osman

    2016-06-21

    Electrical power systems are one of the most important infrastructures that support our society. However, their vulnerabilities have raised great concern recently due to several large-scale blackouts around the world. In this paper, we investigate the robustness of power systems against cascading failures initiated by a random attack. This is done under a simple yet useful model based on global and equal redistribution of load upon failures. We provide a comprehensive understanding of system robustness under this model by (i) deriving an expression for the final system size as a function of the size of initial attacks; (ii) deriving the critical attack size after which the system breaks down completely; (iii) showing that complete system breakdown takes place through a first-order (i.e., discontinuous) transition in terms of the attack size; and (iv) establishing the optimal load-capacity distribution that maximizes robustness. In particular, we show that robustness is maximized when the difference between the capacity and initial load is the same for all lines; i.e., when all lines have the same redundant space regardless of their initial load. This is in contrast with the intuitive and commonly used setting where the capacity of a line is a fixed factor of its initial load.
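
    The optimality of carrying equal loads can be illustrated numerically with the critical-attack-size expression quoted in the fiber-bundle entry above; the short sketch below (distribution parameters are assumptions) compares uniform loads with degenerate loads of the same mean under proportional capacity.

      import numpy as np

      def p_star_uniform(a, b, alpha):
          # Critical attack size, evaluated on a grid for Uniform(a, b) loads
          # with C = (1 + alpha) L:
          #   p* = 1 - E[L] / max_x{ P[L > x] (alpha x + E[L | L > x]) }
          mean = (a + b) / 2.0
          xs = np.linspace(a, b, 100_000, endpoint=False)
          tail = (b - xs) / (b - a)           # P[L > x]
          cond_mean = (xs + b) / 2.0          # E[L | L > x] for a uniform law
          return 1.0 - mean / np.max(tail * (alpha * xs + cond_mean))

      alpha = 0.5
      print("uniform loads:    p* =", p_star_uniform(0.5, 1.5, alpha))
      print("degenerate loads: p* =", alpha / (1.0 + alpha))  # Dirac delta case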

  2. Modeling close encounters with massive asteroids: a Markovian approach. An application to the Vesta family

    NASA Astrophysics Data System (ADS)

    Carruba, V.; Roig, F.; Michtchenko, T. A.; Ferraz-Mello, S.; Nesvorný, D.

    2007-04-01

    Context: Nearly all members of the Vesta family cross the orbits of (4) Vesta, one of the most massive asteroids in the main belt, and some of them approach it closely. When mutual velocities during such close encounters are low, the trajectory of the small body can be gravitationally deflected, consequently changing its heliocentric orbital elements. While the effect of a single close encounter may be small, repeated close encounters may significantly change the proper element distribution of members of asteroid families. Aims: We develop a model of the long-term effect of close encounters with massive asteroids, so as to be able to predict how far former members of the Vesta family could have drifted away from the family. Methods: We first developed a new symplectic integrator that simulates both the effects of close encounters and the Yarkovsky effect. We analyzed the results of a simulation involving a fictitious Vesta family, and propagated the asteroid proper element distribution using the probability density function (pdf hereafter), i.e. the function that describes the probability of having an encounter that modifies a proper element x by Δx, for all the possible values of Δx. Given a proper element distribution at time t, the distribution at time t+T may be predicted if the pdf is known (Bachelier 1900, Théorie de la spéculation; Hughes 1995, Random Walks and Random Environments, Vol. I). Results: We applied our new method to the problem of V-type asteroids outside the Vesta family (i.e., the 31 currently known asteroids in the inner asteroid belt that have the same spectral type as members of the Vesta family, but that are outside the limits of the dynamical family) and determined that at least ten objects have a significant diffusion probability over the minimum estimated age of the Vesta family of 1.2 Gyr (Carruba et al. 2005, A&A, 441, 819). These objects can therefore be explained in the framework of diffusion via repeated close encounters with (4) Vesta of asteroids originally closer to the parent body. Conclusions: We computed diffusion probabilities at the location of four of these asteroids for various initial conditions, parametrized by values of the initial ejection velocity V_ej. Based on our results, we believe the Vesta family is (1200 ± 700) Myr old, with an initial ejection velocity of (240 ± 60) m/s. Appendices are only available in electronic form at http://www.aanda.org
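
    The propagation step is a Chapman-Kolmogorov convolution of the current proper-element distribution with the transition pdf; a minimal numpy sketch, with an assumed Gaussian kernel standing in for the encounter pdf:

      import numpy as np

      # Grid in the proper element x (arbitrary units), step dx.
      dx = 0.001
      x = np.arange(-0.5, 0.5, dx)

      # Current family distribution and an assumed transition pdf per step T.
      family = np.exp(-x**2 / (2 * 0.01**2))
      family /= family.sum() * dx
      kernel = np.exp(-x**2 / (2 * 0.005**2))
      kernel /= kernel.sum() * dx

      # One Chapman-Kolmogorov step: rho(x, t + T) = (rho(., t) * pdf)(x).
      family_next = np.convolve(family, kernel, mode="same") * dx
      print("total probability after one step:", family_next.sum() * dx)  # ~1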

  3. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  4. Universality of long-range correlations in expansion randomization systems

    NASA Astrophysics Data System (ADS)

    Messer, P. W.; Lässig, M.; Arndt, P. F.

    2005-10-01

    We study the stochastic dynamics of sequences evolving by single-site mutations, segmental duplications, deletions, and random insertions. These processes are relevant for the evolution of genomic DNA. They define a universality class of non-equilibrium 1D expansion-randomization systems with generic stationary long-range correlations in a regime of growing sequence length. We obtain explicitly the two-point correlation function of the sequence composition and the distribution function of the composition bias in sequences of finite length. The characteristic exponent χ of these quantities is determined by the ratio of two effective rates, which are explicitly calculated for several specific sequence evolution dynamics of the universality class. Depending on the value of χ, we find two different scaling regimes, which are distinguished by the detectability of the initial composition bias. All analytic results are accurately verified by numerical simulations. We also discuss the non-stationary build-up and decay of correlations, as well as more complex evolutionary scenarios, where the rates of the processes vary in time. Our findings provide a possible example for the emergence of universality in molecular biology.
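
    One member of this universality class can be simulated in a few lines: a binary sequence evolving by single-site mutations and single-site duplications, after which the two-point correlation of the composition is estimated. The rates and sizes below are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      seq = list(rng.integers(0, 2, size=100))   # initial random binary sequence

      mu, gamma = 0.1, 1.0                       # assumed mutation/duplication rates
      for _ in range(200_000):
          i = rng.integers(len(seq))
          if rng.random() < mu / (mu + gamma):
              seq[i] ^= 1                        # single-site mutation
          else:
              seq.insert(i, seq[i])              # single-site duplication
          if len(seq) > 50_000:
              break

      s = 2.0 * np.array(seq) - 1.0              # composition as +/-1 spins
      s -= s.mean()
      for r in (1, 10, 100, 1000):
          print(r, np.mean(s[:-r] * s[r:]))      # two-point correlation C(r)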

  5. A ray tracing model for leaf bidirectional scattering studies

    NASA Technical Reports Server (NTRS)

    Brakke, T. W.; Smith, J. A.

    1987-01-01

    A leaf is modeled as a deterministic two-dimensional structure consisting of a network of circular arcs designed to represent the internal morphology of major species. The path of an individual ray through the leaf is computed using geometric optics. At each intersection of the ray with an arc, the specular reflected and transmitted rays are calculated according to the Snell and Fresnel equations. Diffuse scattering is treated according to Lambert's law. Absorption is also permitted but requires a detailed knowledge of the spectral attenuation coefficients. An ensemble of initial rays are chosen for each incident direction with the initial intersection points on the leaf surface selected randomly. The final equilibrium state after all interactions then yields the leaf bidirectional reflectance and transmittance distributions. The model also yields the internal two dimensional light gradient profile of the leaf.
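
    The per-intersection step, Snell's law for the refracted direction and the Fresnel equations for the reflected fraction, might be sketched as follows for unpolarized light (the refractive indices are assumed values, not the model's calibrated ones):

      import numpy as np

      def snell_fresnel(theta_i, n1=1.0, n2=1.33):
          # Refraction angle and unpolarized Fresnel reflectance at an interface.
          # theta_i: angle of incidence in radians; n1, n2: refractive indices
          # (air and an assumed cell-sap value). R = 1 signals total internal
          # reflection, in which case no refracted angle exists.
          s = n1 * np.sin(theta_i) / n2          # Snell: n1 sin(i) = n2 sin(t)
          if abs(s) >= 1.0:
              return None, 1.0                   # total internal reflection
          theta_t = np.arcsin(s)
          # Fresnel amplitude reflectances for s- and p-polarization.
          rs = ((n1 * np.cos(theta_i) - n2 * np.cos(theta_t))
                / (n1 * np.cos(theta_i) + n2 * np.cos(theta_t)))
          rp = ((n1 * np.cos(theta_t) - n2 * np.cos(theta_i))
                / (n1 * np.cos(theta_t) + n2 * np.cos(theta_i)))
          return theta_t, 0.5 * (rs**2 + rp**2)  # average for unpolarized light

      theta_t, R = snell_fresnel(np.deg2rad(30.0))
      print(np.rad2deg(theta_t), R)              # refracted angle and reflectance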

  6. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
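
    The deterministic-setting half of this claim is easy to check numerically: exponentiating a sum of many small i.i.d. increments yields a log-normal, as the sketch below shows (the increment statistics are assumed); replacing the Gaussian increments with heavy-tailed α-stable ones would instead produce a log-Lévy.

      import numpy as np

      rng = np.random.default_rng(3)
      n_paths, n_steps = 10_000, 500

      # Multiplicative process: X_n = prod_k (1 + eps_k) with small i.i.d. eps_k;
      # log X_n is then a sum of i.i.d. terms, so the CLT makes X_n log-normal.
      eps = rng.normal(0.0, 0.01, size=(n_paths, n_steps))
      log_x = np.log1p(eps).sum(axis=1)

      print("log-mean:", log_x.mean(), "log-std:", log_x.std())
      skew = ((log_x - log_x.mean())**3).mean() / log_x.std()**3
      print("skewness of log X (should be ~0 for a log-normal):", skew)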

  7. Randomness versus specifics for word-frequency distributions

    NASA Astrophysics Data System (ADS)

    Yan, Xiaoyong; Minnhagen, Petter

    2016-02-01

    The text-length dependence of real word-frequency distributions can be connected to the general properties of a random book. This finding has strong implications when deciding between two conceptually different views on word-frequency distributions: the specific 'Zipf's view' and the non-specific 'randomness view'. It is also noted that the text-length transformation of a random book has an exact scaling property precisely for the power-law index γ = 1, as opposed to the Zipf exponent γ = 2, and the implication of this exact scaling property is discussed. However, a real text has γ > 1, and as a consequence γ increases when a real text is shortened. The connections to the predictions from RGF (Random Group Formation) and to the infinite-length limit of a meta-book are also discussed. The difference between 'curve fitting' and 'predicting' word-frequency distributions is stressed. Finally, the question of randomness versus specifics for the distribution of outcomes in sufficiently complex systems has a much wider relevance than just the word-frequency example analyzed in the present work.

  8. Randomness determines practical security of BB84 quantum key distribution.

    PubMed

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-11-10

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, different attacking schemes have been proposed that exploit imperfect devices, but a general security analysis model covering all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attack models, and the results can be applied to guarantee the security of practical quantum key distribution systems.

  9. Randomness determines practical security of BB84 quantum key distribution

    PubMed Central

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-01-01

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, different attacking schemes have been proposed that exploit imperfect devices, but a general security analysis model covering all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attack models, and the results can be applied to guarantee the security of practical quantum key distribution systems. PMID:26552359

  10. Randomness determines practical security of BB84 quantum key distribution

    NASA Astrophysics Data System (ADS)

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-11-01

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked through imperfect state preparation and measurement. Until now, different attacking schemes have been proposed that exploit imperfect devices, but a general security analysis model covering all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attack models, and the results can be applied to guarantee the security of practical quantum key distribution systems.

  11. The effects of noise due to random undetected tilts and paleosecular variation on regional paleomagnetic directions

    USGS Publications Warehouse

    Calderone, G.J.; Butler, R.F.

    1991-01-01

    Random tilting of a single paleomagnetic vector produces a distribution of vectors which is not rotationally symmetric about the original vector and therefore not Fisherian. Monte Carlo simulations were performed on two types of vector distributions: 1) distributions of vectors formed by perturbing a single original vector with a Fisher distribution of bedding poles (each defining a tilt correction) and 2) standard Fisher distributions. These simulations demonstrate that inclinations of vectors drawn from both distributions are biased toward shallow inclinations. The Fisher mean direction of the distribution of vectors formed by perturbing a single vector with random undetected tilts is biased toward shallow inclinations, but this bias is insignificant for angular dispersions of bedding poles less than 20°. -from Authors
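
    A Monte Carlo check of the first type of distribution is straightforward: sample bedding poles from a Fisher distribution about the vertical, apply the corresponding tilt to a single vector, and inspect the inclination of the mean direction. The precision parameter below is an assumed, deliberately dispersed value.

      import numpy as np

      rng = np.random.default_rng(4)

      def fisher_about_z(kappa, n, rng):
          # Unit vectors Fisher-distributed about the z-axis, precision kappa.
          u = rng.random(n)
          cos_t = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
          sin_t = np.sqrt(np.clip(1.0 - cos_t**2, 0.0, None))
          phi = rng.uniform(0.0, 2.0 * np.pi, n)
          return np.column_stack((sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t))

      # Original direction: 45 degree inclination (x = north, z = down).
      v = np.array([np.cos(np.pi / 4.0), 0.0, np.sin(np.pi / 4.0)])

      # Bedding poles scattered about vertical; kappa = 10 gives an angular
      # dispersion above ~20 degrees, where the bias should be visible.
      poles = fisher_about_z(10.0, 100_000, rng)

      # Each pole defines a tilt: the rotation carrying vertical onto the pole,
      # applied to v via Rodrigues' formula.
      z = np.array([0.0, 0.0, 1.0])
      axes = np.cross(z, poles)
      axes /= np.linalg.norm(axes, axis=1, keepdims=True)
      ang = np.arccos(np.clip(poles[:, 2], -1.0, 1.0))
      c, s = np.cos(ang)[:, None], np.sin(ang)[:, None]
      tilted = v * c + np.cross(axes, v) * s + axes * (axes @ v)[:, None] * (1.0 - c)

      mean_vec = tilted.mean(axis=0)
      mean_vec /= np.linalg.norm(mean_vec)
      print("inclination of mean direction (deg):",
            np.degrees(np.arcsin(mean_vec[2])))   # biased shallower than 45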

  12. Mobile access to virtual randomization for investigator-initiated trials.

    PubMed

    Deserno, Thomas M; Keszei, András P

    2017-08-01

    Background/aims Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places, where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via the representational state transfer web services. Furthermore, a simple web-based setup allows configuring the appropriate statistics by non-statisticians. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results Apps are provided for iOS and Android and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees a strictly sequential processing in all trial sites. Covering 88% of all randomization models that are used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.
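
    The "virtual" idea, no stored list, with each allocation recomputed on demand from a deterministic seed, can be sketched for permuted-block randomization as follows (the trial identifier, the two arms and block size 4 are hypothetical choices, not details of the deployed system):

      import random

      def virtual_block_allocation(trial_id, subject_index,
                                   arms=("A", "B"), block_size=4):
          # Returns the arm for the subject at 0-based position subject_index.
          # No list is stored: each block is a seeded shuffle of a balanced
          # block, so every site recomputes the identical allocation sequence.
          assert block_size % len(arms) == 0
          block_no, within = divmod(subject_index, block_size)
          block = list(arms) * (block_size // len(arms))   # balanced block
          random.Random(f"{trial_id}:{block_no}").shuffle(block)
          return block[within]

      for k in range(8):
          print(k, virtual_block_allocation("IIT-2017-001", k))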

  13. Response kinetics of tethered bacteria to stepwise changes in nutrient concentration.

    PubMed

    Chernova, Anna A; Armitage, Judith P; Packer, Helen L; Maini, Philip K

    2003-09-01

    We examined the changes in swimming behaviour of the bacterium Rhodobacter sphaeroides in response to stepwise changes in a nutrient (propionate), following the pre-stimulus motion, the initial response and the adaptation to the sustained concentration of the chemical. This was carried out by tethering motile cells by their flagella to glass slides and following the rotational behaviour of their cell bodies in response to the nutrient change. Computerised motion analysis was used to analyse the behaviour. Distributions of run and stop times were obtained from rotation data for tethered cells. Exponential and Weibull fits for these distributions, and variability in individual responses are discussed. In terms of parameters derived from the run and stop time distributions, we compare the responses to stepwise changes in the nutrient concentration and the long-term behaviour of 84 cells under 12 propionate concentration levels from 1 nM to 25 mM. We discuss traditional assumptions for the random walk approximation to bacterial swimming and compare them with the observed R. sphaeroides motile behaviour.
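
    Exponential and Weibull fits of run-time data like those described can be compared with scipy; the sketch below does this on synthetic data, with the Weibull parameters invented for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      run_times = stats.weibull_min.rvs(1.5, scale=2.0, size=500, random_state=rng)

      # Maximum-likelihood fits with the location pinned at zero.
      shape_w, _, scale_w = stats.weibull_min.fit(run_times, floc=0.0)
      rate = 1.0 / run_times.mean()            # MLE of the exponential rate

      # Compare log-likelihoods; a Weibull shape near 1 favours the exponential.
      ll_w = stats.weibull_min.logpdf(run_times, shape_w, scale=scale_w).sum()
      ll_e = stats.expon.logpdf(run_times, scale=1.0 / rate).sum()
      print(f"Weibull shape={shape_w:.2f}, logL={ll_w:.1f}; expon logL={ll_e:.1f}")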

  14. Estimation of distribution algorithm with path relinking for the blocking flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2018-05-01

    This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
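
    Stripped of the NEH seeding, path relinking, referenced local search and diversity maintenance, the core estimation-of-distribution loop might look like the following sketch; the position-job probability model, the random instance and all parameter values are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(6)
      n_jobs, n_machines = 10, 5
      proc = rng.uniform(1.0, 10.0, size=(n_jobs, n_machines))  # random instance

      def makespan_blocking(perm):
          # Departure-time recursion for a blocking flow shop (no buffers):
          # a job cannot leave a machine until the next machine is free.
          d = np.zeros(n_machines + 1)   # previous job's departure times
          for j in perm:
              e = np.zeros(n_machines + 1)
              e[0] = d[1]                # start on machine 1 when it is vacated
              for m in range(1, n_machines):
                  e[m] = max(e[m - 1] + proc[j, m - 1], d[m + 1])
              e[n_machines] = e[n_machines - 1] + proc[j, n_machines - 1]
              d = e
          return d[n_machines]

      def sample_perm(P, rng):
          # Sample a permutation position by position from the model.
          perm, free = [], list(range(n_jobs))
          for pos in range(n_jobs):
              w = P[pos, free]
              j = rng.choice(free, p=w / w.sum())
              perm.append(j)
              free.remove(j)
          return perm

      P = np.full((n_jobs, n_jobs), 1.0 / n_jobs)  # position-job probabilities
      pop = [sample_perm(P, rng) for _ in range(50)]
      for gen in range(100):
          pop.sort(key=makespan_blocking)
          elite = pop[:10]                         # superior individuals
          counts = np.full((n_jobs, n_jobs), 0.1)  # smoothing pseudo-counts
          for perm in elite:
              for pos, j in enumerate(perm):
                  counts[pos, j] += 1.0
          P = counts / counts.sum(axis=1, keepdims=True)
          pop = elite + [sample_perm(P, rng) for _ in range(40)]
      pop.sort(key=makespan_blocking)
      print("best makespan:", makespan_blocking(pop[0]))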

  15. Heat conduction in periodic laminates with probabilistic distribution of material properties

    NASA Astrophysics Data System (ADS)

    Ostrowski, Piotr; Jędrysiak, Jarosław

    2017-04-01

    This contribution deals with a problem of heat conduction in a two-phase laminate made of micro-laminas distributed periodically along one direction. In general, Fourier's law describing heat conduction in the considered composite has highly oscillating and discontinuous coefficients. Therefore, the tolerance averaging technique (cf. Woźniak et al. in Thermomechanics of microheterogeneous solids and structures. Monografie - Politechnika Łódzka, Wydawnictwo Politechniki Łódzkiej, Łódź, 2008) is applied. Based on this technique, the averaged differential equations for a tolerance-asymptotic model are derived and solved analytically for given initial-boundary conditions. The second part of this contribution investigates the effect of the material-property ratio ω of the two components on the total temperature field θ, under the assumption that the conductivities of the micro-laminas are not necessarily uniquely described. Numerical experiments (Monte Carlo simulation) are executed under the assumption that ω is a random variable with a fixed probability distribution. At the end, based on the obtained results, a crucial hypothesis is formulated.
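
    The Monte Carlo part of such an experiment is simple to sketch for conduction across the laminas, where the effective conductivity of a period is the harmonic mean of the phase conductivities; the lognormal choice for ω below is an assumption, not the paper's distribution.

      import numpy as np

      rng = np.random.default_rng(7)
      k1 = 1.0                     # conductivity of phase 1 (fixed)
      phi = 0.5                    # volume fraction of phase 1

      # omega = k2 / k1 as a random variable; lognormal is an assumed choice.
      omega = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
      k2 = omega * k1

      # Series (across-laminas) effective conductivity: harmonic mean.
      k_eff = 1.0 / (phi / k1 + (1.0 - phi) / k2)
      print("mean k_eff:", k_eff.mean(), "std:", k_eff.std())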

  16. Excitation of Crossflow Instabilities in a Swept Wing Boundary Layer

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Choudhari, Meelan; Li, Fei; Streett, Craig L.; Chang, Chau-Lyan

    2010-01-01

    The problem of crossflow receptivity is considered in the context of a canonical 3D boundary layer (viz., the swept Hiemenz boundary layer) and a swept airfoil used recently in the SWIFT flight experiment performed at Texas A&M University. First, Hiemenz flow is used to analyze localized receptivity due to a spanwise periodic array of small amplitude roughness elements, with the goal of quantifying the effects of array size and location. Excitation of crossflow modes via nonlocalized but deterministic distribution of surface nonuniformity is also considered and contrasted with roughness induced acoustic excitation of Tollmien-Schlichting waves. Finally, roughness measurements on the SWIFT model are used to model the effects of random, spatially distributed roughness of sufficiently small amplitude with the eventual goal of enabling predictions of initial crossflow disturbance amplitudes as functions of surface roughness parameters.

  17. Hot spots on Io: Initial results from Galileo's near infrared mapping spectrometer

    USGS Publications Warehouse

    Lopes-Gautier, R.; Davies, A.G.; Carlson, R.; Smythe, W.; Kamp, L.; Soderblom, L.; Leader, F.E.; Mehlman, R.

    1997-01-01

    The Near-Infrared Mapping Spectrometer on Galileo has monitored the volcanic activity on Io since June 28, 1996. This paper presents preliminary analysis of NIMS thermal data for the first four orbits of the Galileo mission. NIMS has detected 18 new hot spots and 12 others which were previously known to be active. The distribution of the hot spots on Io's surface may not be random, as hot spots surround the two bright, SO2-rich regions of Bosphorus Regio and Colchis Regio. Most hot spots seem to be persistently active from orbit to orbit and 10 of those detected were active in 1979 during the Voyager encounters. We report the distribution of hot spot temperatures and find that they are consistent with silicate volcanism. Copyright 1997 by the American Geophysical Union.

  18. A re-examination of the biphasic theory of skeletal muscle growth.

    PubMed Central

    Levine, A S; Hegarty, P V

    1977-01-01

    Because of the importance of fibre diameter measurements it was decided to re-evaluate the biphasic theory of skeletal muscle growth and development. This theory proposes an initial monophasic distribution of muscle fibres which changes to a biphasic distribution during development. The theory is based on observations made on certain muscles in mice, where two distinct populations of fibre diameters (20 and 40 µm) contribute to the biphasic distribution. In the present investigation cross sections of frozen biceps brachii of mice in rigor mortis were examined. The rigor state was used to avoid complications produced by thaw-rigor contraction. The diameters of the outermost and innermost fibres were found to be significantly different. However, if the outer and inner fibres were combined to form one group, no significant difference between this group and other random groups was found. The distributions of all groups were monophasic. The diameters of isolated fibres from mice and rats also displayed a monophasic distribution. This evidence leads to the conclusion that the biphasic theory of muscle growth is untenable. Some of the variables which may occur in fibre size and shape are discussed. PMID:858691

  19. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension D_f,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density, ρ^(2), was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ^(2) on α is similar to that of D_f,p on α. ρ^(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ^(2) tends to a stable equilibrium value.

  20. The invariant statistical rule of aerosol scattering pulse signal modulated by random noise

    NASA Astrophysics Data System (ADS)

    Yan, Zhen-gang; Bian, Bao-Min; Yang, Juan; Peng, Gang; Li, Zhen-hua

    2010-11-01

    A model of random background noise acting on particle signals is established to study the impact of the background noise of the photoelectric sensor in a laser airborne particle counter on the statistical character of the aerosol scattering pulse signals. The results show that the noise broadens the statistical distribution of the particle measurements. Further numerical study shows that the output signal amplitude retains the same type of distribution when airborne-particle signals following a lognormal distribution are modulated by random noise that is itself lognormally distributed; that is, the statistics obey a law of invariance. Based on this model, the background noise of the photoelectric sensor and the counting distributions of the random aerosol scattering pulse signals are obtained and analyzed using a high-speed data acquisition card, the PCI-9812. The experimental results and the simulation results are found to be in good agreement.
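
    The invariance claim is quick to check numerically, since the product of independent lognormal variables is again lognormal; the parameters below are assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      signal = rng.lognormal(mean=1.0, sigma=0.4, size=200_000)  # pulse amplitudes
      noise = rng.lognormal(mean=0.0, sigma=0.2, size=200_000)   # modulating noise
      modulated = signal * noise

      # The log of a lognormal is normal, so log(modulated) should look Gaussian.
      logm = np.log(modulated)
      print("skew:", stats.skew(logm), "excess kurtosis:", stats.kurtosis(logm))
      # Both ~0: the modulated amplitudes remain lognormally distributed.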

  1. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    PubMed

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
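
    An AIC-based comparison of candidate distributions like the one described can be run with scipy. The sketch below uses synthetic stand-in data, and since scipy has no DPLN implementation, only the simpler candidates are compared.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      data = rng.lognormal(mean=0.0, sigma=0.6, size=2000)   # stand-in data

      candidates = {
          "lognormal": stats.lognorm,
          "normal": stats.norm,
          "exponential": stats.expon,
          "pareto": stats.pareto,
      }
      for name, dist in candidates.items():
          params = dist.fit(data)                 # maximum-likelihood fit
          ll = dist.logpdf(data, *params).sum()
          aic = 2 * len(params) - 2 * ll          # AIC = 2k - 2 ln L
          print(f"{name:>12}: AIC = {aic:.1f}")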

  2. Random-Walk Type Model with Fat Tails for Financial Markets

    NASA Astrophysics Data System (ADS)

    Matuttis, Hans-Georg

    Starting from the random-walk model, practices of financial markets are incorporated into the random walk so that fat-tail distributions like those in the high-frequency data of the S&P 500 index are reproduced, even though the individual mechanisms are modeled with normally distributed data. The incorporation of local correlation narrows the distribution for "frequent" events, whereas global correlations due to technical analysis lead to fat tails. Delay of market transactions in the trading process shifts the fat-tail probabilities downwards. Such an inclusion of reactions to market fluctuations leads to mini-trends which are distributed with unit variance.

  3. Evidence for a bimodal distribution in human communication.

    PubMed

    Wu, Ye; Zhou, Changsong; Xiao, Jinghua; Kurths, Jürgen; Schellnhuber, Hans Joachim

    2010-11-02

    Interacting human activities underlie the patterns of many social, technological, and economic phenomena. Here we present clear empirical evidence from Short Message correspondence that observed human actions are the result of the interplay of three basic ingredients: Poisson initiation of tasks and decision making for task execution in individual humans as well as interaction among individuals. This interplay leads to new types of interevent time distribution, neither completely Poisson nor power-law, but a bimodal combination of them. We show that the events can be separated into independent bursts which are generated by frequent mutual interactions in short times following random initiations of communications in longer times by the individuals. We introduce a minimal model of two interacting priority queues incorporating the three basic ingredients which fits well the distributions using the parameters extracted from the empirical data. The model can also embrace a range of realistic social interacting systems such as e-mail and letter communications when taking the time scale of processing into account. Our findings provide insight into various human activities both at the individual and network level. Our analysis and modeling of bimodal activity in human communication from the viewpoint of the interplay between processes of different time scales is likely to shed light on bimodal phenomena in other complex systems, such as interevent times in earthquakes, rainfall, forest fire, and economic systems, etc.

  4. Evidence for a bimodal distribution in human communication

    PubMed Central

    Wu, Ye; Zhou, Changsong; Xiao, Jinghua; Kurths, Jürgen; Schellnhuber, Hans Joachim

    2010-01-01

    Interacting human activities underlie the patterns of many social, technological, and economic phenomena. Here we present clear empirical evidence from Short Message correspondence that observed human actions are the result of the interplay of three basic ingredients: Poisson initiation of tasks and decision making for task execution in individual humans as well as interaction among individuals. This interplay leads to new types of interevent time distribution, neither completely Poisson nor power-law, but a bimodal combination of them. We show that the events can be separated into independent bursts which are generated by frequent mutual interactions in short times following random initiations of communications in longer times by the individuals. We introduce a minimal model of two interacting priority queues incorporating the three basic ingredients which fits well the distributions using the parameters extracted from the empirical data. The model can also embrace a range of realistic social interacting systems such as e-mail and letter communications when taking the time scale of processing into account. Our findings provide insight into various human activities both at the individual and network level. Our analysis and modeling of bimodal activity in human communication from the viewpoint of the interplay between processes of different time scales is likely to shed light on bimodal phenomena in other complex systems, such as interevent times in earthquakes, rainfall, forest fire, and economic systems, etc. PMID:20959414

  5. Genome-wide control of the distribution of meiotic recombination.

    PubMed

    Grey, Corinne; Baudat, Frédéric; de Massy, Bernard

    2009-02-17

    Meiotic recombination events are not randomly distributed in the genome but occur in specific regions called recombination hotspots. Hotspots are predicted to be preferred sites for the initiation of meiotic recombination and their positions and activities are regulated by yet-unknown controls. The activity of the Psmb9 hotspot on mouse Chromosome 17 (Chr 17) varies according to genetic background. It is active in strains carrying a recombinant Chr 17 where the proximal third is derived from Mus musculus molossinus. We have identified the genetic locus required for Psmb9 activity, named Dsbc1 for Double-strand break control 1, and mapped this locus within a 6.7-Mb region on Chr 17. Based on cytological analysis of meiotic DNA double-strand breaks (DSB) and crossovers (COs), we show that Dsbc1 influences DSB and CO, not only at Psmb9, but in several other regions of Chr 17. We further show that CO distribution is also influenced by Dsbc1 on Chrs 15 and 18. Finally, we provide direct molecular evidence for the regulation in trans mediated by Dsbc1, by showing that it controls the CO activity at the Hlx1 hotspot on Chr 1. We thus propose that Dsbc1 encodes for a trans-acting factor involved in the specification of initiation sites of meiotic recombination genome wide in mice.

  6. Occupational hazards and safety measures amongst the paint factory workers in Lagos, Nigeria.

    PubMed

    Awodele, Olufunsho; Popoola, Temidayo D; Ogbudu, Bawo S; Akinyede, Akin; Coker, Herbert A B; Akintonwa, Alade

    2014-06-01

    The manufacture of paint involves a variety of processes that present medical hazards. Safety initiatives are hence introduced to limit hazard exposures and promote workplace safety. The aim of this study is to assess the use of available control measures/initiatives in selected paint factories in Lagos West Senatorial District, Nigeria. A total of 400 randomly selected paint factory workers were involved in the study. A well-structured World Health Organization standard questionnaire was designed and distributed to the workers to elicit information on awareness of occupational hazards, use of personal protective devices, and commonly experienced adverse symptoms. Urine samples were obtained from 50 workers randomly selected from these 400 participants, and the concentrations of the heavy metals (lead, cadmium, arsenic, and chromium) were determined using atomic absorption spectroscopy. The results show that 72.5% of the respondents are aware of the hazards associated with their jobs; 30% have had formal training on hazards and safety measures; 40% do not use personal protective devices, and 90% of the respondents reported symptoms relating to hazard exposure. There was a statistically significant (p < 0.05) increase in the mean heavy metal concentrations in the urine samples obtained from paint factory workers as compared with nonfactory workers. The need to develop effective frameworks that will initiate the integration and ensure implementation of safety regulations in paint factories is evident. Where these exist, there is a need to promote adherence to these practice guidelines.

  7. Distribution of shortest cycle lengths in random networks

    NASA Astrophysics Data System (ADS)

    Bonneau, Haggai; Hassid, Aviv; Biham, Ofer; Kühn, Reimer; Katzav, Eytan

    2017-12-01

    We present analytical results for the distribution of shortest cycle lengths (DSCL) in random networks. The approach is based on the relation between the DSCL and the distribution of shortest path lengths (DSPL). We apply this approach to configuration model networks, for which analytical results for the DSPL were obtained before. We first calculate the fraction of nodes in the network which reside on at least one cycle. Conditioning on being on a cycle, we provide the DSCL over ensembles of configuration model networks with degree distributions which follow a Poisson distribution (Erdős-Rényi network), degenerate distribution (random regular graph), and a power-law distribution (scale-free network). The mean and variance of the DSCL are calculated. The analytical results are found to be in very good agreement with the results of computer simulations.
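
    An empirical DSCL to compare against the analytics can be computed with networkx: the shortest cycle through a node v is found by removing each edge (v, u) in turn and measuring the shortest remaining path from v to u. A simple (unoptimized) sketch on an Erdős-Rényi graph with assumed parameters:

      from collections import Counter

      import networkx as nx

      def shortest_cycle_length(G, v):
          # Length of the shortest cycle through v (None if v is on no cycle).
          best = None
          for u in list(G.neighbors(v)):
              G.remove_edge(v, u)
              try:
                  d = nx.shortest_path_length(G, v, u)
                  best = d + 1 if best is None else min(best, d + 1)
              except nx.NetworkXNoPath:
                  pass
              G.add_edge(v, u)
          return best

      # Erdos-Renyi ensemble member with mean degree c (illustrative values).
      n, c = 1000, 3.0
      G = nx.gnp_random_graph(n, c / (n - 1), seed=10)
      lengths = [shortest_cycle_length(G, v) for v in G.nodes()]
      on_cycle = [l for l in lengths if l is not None]
      print("fraction of nodes on a cycle:", len(on_cycle) / n)
      print("DSCL:", sorted(Counter(on_cycle).items()))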

  8. Neural Mechanisms Behind Identification of Leptokurtic Noise and Adaptive Behavioral Response

    PubMed Central

    d'Acremont, Mathieu; Bossaerts, Peter

    2016-01-01

    Large-scale human interaction through, for example, financial markets causes ceaseless random changes in outcome variability, producing frequent and salient outliers that render the outcome distribution more peaked than the Gaussian distribution, and with longer tails. Here, we study how humans cope with this evolutionarily novel leptokurtic noise, focusing on the neurobiological mechanisms that allow the brain 1) to recognize the outliers as noise and 2) to regulate the control necessary for adaptive response. We used functional magnetic resonance imaging, while participants tracked a target whose movements were affected by leptokurtic noise. After initial overreaction and insufficient subsequent correction, participants improved performance significantly. Yet, persistently long reaction times pointed to a continued need for vigilance and control. We ran a contrasting treatment where outliers reflected permanent moves of the target, as in traditional mean-shift paradigms. Importantly, outliers were equally frequent and salient. There, control was superior and reaction time was faster. We present a novel reinforcement learning model that fits observed choices better than the Bayes-optimal model. Only the anterior insula discriminated between the two types of outliers. In both treatments, outliers initially activated an extensive bottom-up attention and belief network, followed by sustained engagement of the fronto-parietal control network. PMID:26850528

  9. Oral anticoagulant re-initiation following intracerebral hemorrhage in non-valvular atrial fibrillation: Global survey of the practices of neurologists, neurosurgeons and thrombosis experts.

    PubMed

    Xu, Yan; Shoamanesh, Ashkan; Schulman, Sam; Dowlatshahi, Dar; Al-Shahi Salman, Rustam; Moldovan, Ioana Doina; Wells, Philip Stephen; AlKherayf, Fahad

    2018-01-01

    While oral anticoagulants (OACs) are highly effective for ischemic stroke prevention in atrial fibrillation, intracerebral hemorrhage (ICH) remains the most feared complication of OAC. Clinical controversy remains regarding OAC resumption and its timing for ICH survivors with atrial fibrillation because the balance between risks and benefits has not been investigated in randomized trials. Our aim was to survey the practice of stroke neurologists, thrombosis experts and neurosurgeons on OAC re-initiation following OAC-associated ICH. An online survey was distributed to members of the International Society for Thrombosis and Haemostasis, Canadian Stroke Consortium, NAVIGATE-ESUS trial investigators (ClinicalTrials.gov identifier NCT02313909) and American Association of Neurological Surgeons. Demographic factors and 11 clinical scenarios were included. Two hundred twenty-eight participants from 38 countries completed the survey. The majority of participants were affiliated with academic centers, and >20% managed more than 15 OAC-associated ICH patients/year. The proportion of respondents suggesting OAC resumption varied from 30% (for cerebral amyloid angiopathy) to 98% (for traumatic ICH). Within this group, there was a wide distribution in responses for the timing of resumption: 21.4% preferred to re-start OACs within 1-3 weeks of the incident ICH, while 25.3% opted to start after 1-3 months. Neurosurgery respondents preferred earlier OAC resumption compared to stroke neurologists or thrombosis experts in 5 scenarios (p<0.05 by Kendall's tau). Wide variations in current practice exist in the management of OAC-associated ICH, with decisions influenced by patient- and provider-related factors. As these variations likely reflect the lack of high-quality evidence, randomized trials are urgently needed in this population.

  10. Pore Pressure and Stress Distributions Around a Hydraulic Fracture in Heterogeneous Rock

    NASA Astrophysics Data System (ADS)

    Gao, Qian; Ghassemi, Ahmad

    2017-12-01

    One of the most significant characteristics of unconventional petroleum-bearing formations is their heterogeneity, which affects the stress distribution, hydraulic fracture propagation and fluid flow. This study focuses on the stress and pore pressure redistributions during hydraulic stimulation in a heterogeneous poroelastic rock. Lognormal random distributions of Young's modulus and permeability are generated to simulate the heterogeneous distributions of material properties. A 3D fully coupled poroelastic model based on the finite element method is presented utilizing a displacement-pressure formulation. In order to verify the model, numerical results are compared with analytical solutions, showing excellent agreement. The effects of heterogeneities on stress and pore pressure distributions around a penny-shaped fracture in poroelastic rock are then analyzed. Results indicate that the stress and pore pressure distributions are more complex in a heterogeneous reservoir than in a homogeneous one. The spatial extent of stress reorientation during hydraulic stimulation is a function of time and is continuously changing due to the diffusion of pore pressure in the heterogeneous system. In contrast to the stress distributions in homogeneous media, irregular distributions of stresses and pore pressure are observed. Due to the change of material properties, shear stresses and nonuniform deformations are generated. The induced shear stresses in heterogeneous rock cause the initial horizontal principal stresses to rotate out of horizontal planes.
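
    The abstract does not give the parameters or spatial correlation of the lognormal property fields; a minimal sketch, assuming spatially uncorrelated cell values and illustrative means and coefficients of variation, is:

        # Sketch: lognormal Young's modulus and permeability on a grid.
        import numpy as np

        rng = np.random.default_rng(42)
        nx, ny, nz = 50, 50, 20                # grid of finite elements

        def lognormal_field(mean, cov, shape, rng):
            """Lognormal field with target arithmetic mean and coeff. of variation."""
            sigma2 = np.log(1.0 + cov**2)      # underlying normal variance
            mu = np.log(mean) - 0.5 * sigma2   # so E[exp(N(mu, sigma2))] = mean
            return rng.lognormal(mu, np.sqrt(sigma2), shape)

        E = lognormal_field(mean=30e9, cov=0.3, shape=(nx, ny, nz), rng=rng)   # Pa
        k = lognormal_field(mean=1e-18, cov=1.0, shape=(nx, ny, nz), rng=rng)  # m^2

        print("E: mean %.3g Pa, std %.3g" % (E.mean(), E.std()))
        print("k: mean %.3g m^2, std %.3g" % (k.mean(), k.std()))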

  11. Paretian Poisson Processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.
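
    Under one common convention (an assumption here, since the abstract fixes no normalization), such a process on the positive half-line has mean number of points above x equal to (x/s)^(-alpha); a minimal sketch samples the points above a small cutoff:

        # Sketch: Poisson process on (0, inf) with Paretian mean measure.
        import numpy as np

        rng = np.random.default_rng(1)
        alpha, s, eps = 1.5, 1.0, 0.01

        mean_above = (eps / s) ** (-alpha)     # expected points above the cutoff
        n = rng.poisson(mean_above)
        # Given the count, points above eps are i.i.d. Pareto(alpha) on (eps, inf):
        points = eps * (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha)

        points.sort()
        print(f"{n} points above {eps}; largest five: {points[-5:]}")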

  12. Boundary-layer receptivity due to distributed surface imperfections of a deterministic or random nature

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan

    1992-01-01

    Acoustic receptivity of a Blasius boundary layer in the presence of distributed surface irregularities is investigated analytically. It is shown that, out of the entire spatial spectrum of the surface irregularities, only a small band of Fourier components can lead to an efficient conversion of the acoustic input at any given frequency to an unstable eigenmode of the boundary layer flow. The location and width of this most receptive band of wavenumbers correspond to a relative detuning of O(R_lb^(-3/8)) with respect to the lower-neutral instability wavenumber at the frequency under consideration, R_lb being the Reynolds number based on a typical boundary-layer thickness at the lower branch of the neutral stability curve. Surface imperfections in the form of discrete mode waviness in this range of wavenumbers lead to initial instability amplitudes which are O(R_lb^(3/8)) larger than those caused by a single, isolated roughness element. In contrast, irregularities with a continuous spatial spectrum produce much smaller instability amplitudes, even compared to the isolated case, since the increase due to the resonant nature of the response is more than compensated for by the asymptotically small bandwidth of the receptivity process. Analytical expressions for the maximum possible instability amplitudes, as well as their expectation for an ensemble of statistically irregular surfaces with random phase distributions, are also presented.

  13. [Can the local energy minimization refine the PDB structures of different resolution universally?].

    PubMed

    Godzi, M G; Gromova, A P; Oferkin, I V; Mironov, P V

    2009-01-01

    The local energy minimization was statistically validated as a refinement strategy for PDB structure pairs of different resolution. Thirteen pairs of structures differing only in resolution were extracted from the PDB, representing 11 identical proteins solved by different X-ray diffraction techniques. The distribution of the RMSD value was calculated for these pairs before and after the local energy minimization of each structure. The MMFF94 force field was used for energy calculations, and the quasi-Newton method was used for local energy minimization. Comparison of these two RMSD distributions proved that local energy minimization statistically increases the structural differences within pairs, so it cannot be used for refinement purposes. To explore the prospects of complex refinement strategies based on energy minimization, randomized structures were obtained by moving the initial PDB structures as far as the minimized structures had been moved in the multidimensional space of atomic coordinates. For these randomized structures, the RMSD distribution was calculated and compared with that for the minimized structures. The significant differences in their mean values proved that the energy surface of the protein has only a few minima near the conformations of different resolution obtained by X-ray diffraction for the PDB. Some other results obtained by exploring the energy surface near these conformations are also presented. These results are expected to be very useful for the development of new protein refinement strategies based on energy minimization.

  14. Analyzing Molecular Clouds with the Spectral Correlation Function

    NASA Astrophysics Data System (ADS)

    Rosolowsky, E. W.; Goodman, A. A.; Williams, J. P.; Wilner, D. J.

    1997-12-01

    The Spectral Correlation Function (SCF) is a new data analysis algorithm that measures how the properties of spectra vary from position to position in a spectral-line map. For each spectrum in a data cube, the SCF measures the "difference" between that spectrum and a specified subset of its neighbors. This algorithm is intended for use on both simulated and observed position-position-velocity data cubes. In initial tests of the SCF, we have shown that a histogram of the SCF for a map is a good descriptor of the spatial-velocity distribution of material. In one test, we compare the SCF distributions for: 1) a real data cube; 2) a cube made from the real cube's spectra with randomized positions; and 3) the results of a preliminary MHD simulation by Gammie, Ostriker, and Stone. The results of the test show that the real cloud and the simulation are much closer to each other in their SCF distributions than is either to the randomized cube. We are now in the process of applying the SCF to a larger set of observed and simulated data cubes. Our ultimate aim is to use the SCF both on its own, as a descriptor of the spatial-kinetic properties of interstellar gas, and also as a tool for evaluating how well simulations resemble observations. Our expectation is that the SCF will be more discriminatory (less likely to produce a false match) than the data cube descriptors currently available.
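
    The published SCF has several normalizations and lags not reproduced here; a minimal sketch of the basic neighbor-comparison idea, assuming a simple form (one minus the normalized rms difference from the mean of the four nearest neighbors), is:

        # Sketch of an SCF-like statistic on a position-position-velocity cube.
        import numpy as np

        rng = np.random.default_rng(7)
        cube = rng.random((32, 32, 64))        # stand-in for a PPV data cube

        def scf_map(cube):
            nx, ny, _ = cube.shape
            out = np.full((nx, ny), np.nan)
            for i in range(1, nx - 1):
                for j in range(1, ny - 1):
                    spec = cube[i, j]
                    neigh = (cube[i-1, j] + cube[i+1, j] +
                             cube[i, j-1] + cube[i, j+1]) / 4.0
                    # 1 means the spectrum equals its neighborhood average
                    out[i, j] = 1.0 - np.sqrt(np.mean((spec - neigh) ** 2)
                                              / np.mean(spec ** 2))
            return out

        print("median SCF of a random cube:", np.nanmedian(scf_map(cube)))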

  15. Are all data created equal?--Exploring some boundary conditions for a lazy intuitive statistician.

    PubMed

    Lindskog, Marcus; Winman, Anders

    2014-01-01

    The study investigated potential effects of the presentation order of numeric information on retrospective subjective judgments of descriptive statistics of this information. The experiments were theoretically motivated by the assumption in the naïve sampling model of independence between the temporal encoding order of data in long-term memory and retrieval probability (i.e., as implied by a "random sampling from memory" metaphor). In Experiment 1, participants experienced Arabic numbers that varied in distribution shape/variability between the first and the second half of the information sequence. Results showed no effects of order on judgments of mean, variability or distribution shape. To strengthen the interpretation of these results, Experiment 2 used a repeated judgment procedure, with an initial judgment occurring prior to the change in distribution shape of the information halfway through data presentation. The results of Experiment 2 were in line with those from Experiment 1, and in addition showed that the act of making explicit judgments did not impair the accuracy of later judgments, as would be suggested by an anchoring and insufficient adjustment strategy. Overall, the results indicated that participants were very responsive to the properties of the data while at the same time being more or less immune to order effects. The results were interpreted as being in line with naïve sampling models in which values are stored as exemplars and sampled randomly from long-term memory.

  16. EMR-based medical knowledge representation and inference via Markov random fields and distributed representation learning.

    PubMed

    Zhao, Chao; Jiang, Jingchi; Guan, Yi; Guo, Xitong; He, Bin

    2018-05-01

    Electronic medical records (EMRs) contain medical knowledge that can be used for clinical decision support (CDS). Our objective is to develop a general system that can extract and represent knowledge contained in EMRs to support three CDS tasks-test recommendation, initial diagnosis, and treatment plan recommendation-given the condition of a patient. We extracted four kinds of medical entities from records and constructed an EMR-based medical knowledge network (EMKN), in which nodes are entities and edges reflect their co-occurrence in a record. Three bipartite subgraphs (bigraphs) were extracted from the EMKN, one to support each task. One part of the bigraph was the given condition (e.g., symptoms), and the other was the condition to be inferred (e.g., diseases). Each bigraph was regarded as a Markov random field (MRF) to support the inference. We proposed three graph-based energy functions and three likelihood-based energy functions. Two of these functions are based on knowledge representation learning and can provide distributed representations of medical entities. Two EMR datasets and three metrics were utilized to evaluate the performance. As a whole, the evaluation results indicate that the proposed system outperformed the baseline methods. The distributed representation of medical entities does reflect similarity relationships with respect to knowledge level. Combining EMKN and MRF is an effective approach for general medical knowledge representation and inference. Different tasks, however, require individually designed energy functions. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Generating equilateral random polygons in confinement

    NASA Astrophysics Data System (ADS)

    Diao, Y.; Ernst, C.; Montemayor, A.; Ziegler, U.

    2011-10-01

    One challenging problem in biology is to understand the mechanism of DNA packing in a confined volume such as a cell. It is known that confined circular DNA is often knotted and hence the topology of the extracted (and relaxed) circular DNA can be used as a probe of the DNA packing mechanism. However, in order to properly estimate the topological properties of the confined circular DNA structures using mathematical models, it is necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths that are confined in a volume such as a sphere of certain fixed radius. Finding efficient algorithms that properly sample the space of such confined equilateral random polygons is a difficult problem. In this paper, we propose a method that generates confined equilateral random polygons based on their probability distribution. This method requires the creation of a large database initially. However, once the database has been created, a confined equilateral random polygon of length n can be generated in linear time in terms of n. The errors introduced by the method can be controlled and reduced by the refinement of the database. Furthermore, our numerical simulations indicate that these errors are unbiased and tend to cancel each other in a long polygon.
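
    The paper's database-driven sampler is not reproduced here; as a minimal alternative sketch, one can explore confined equilateral polygons by rejection, starting from a regular polygon inside a sphere of radius R and applying crankshaft rotations, which preserve all edge lengths. Parameters are illustrative, and this simple chain is not guaranteed to sample the confined space with the correct probabilities.

        # Sketch: crankshaft moves on an equilateral polygon inside a sphere.
        import numpy as np

        rng = np.random.default_rng(3)

        def regular_polygon(n):
            """Planar equilateral n-gon with unit edges, centered at the origin."""
            theta = 2 * np.pi * np.arange(n) / n
            r = 0.5 / np.sin(np.pi / n)        # circumradius for unit edges
            return np.column_stack([r*np.cos(theta), r*np.sin(theta), np.zeros(n)])

        def crankshaft(verts, rng):
            n = len(verts)
            i, j = sorted(rng.choice(n, size=2, replace=False))
            if j - i < 2 or (i == 0 and j == n - 1):
                return verts                   # nothing strictly between i and j
            axis = verts[j] - verts[i]
            axis /= np.linalg.norm(axis)
            phi = rng.uniform(0, 2 * np.pi)
            K = np.array([[0, -axis[2], axis[1]],
                          [axis[2], 0, -axis[0]],
                          [-axis[1], axis[0], 0]])
            Rm = np.eye(3) + np.sin(phi)*K + (1 - np.cos(phi))*(K @ K)  # Rodrigues
            out = verts.copy()
            # rotate the vertices between i and j about the axis through both;
            # the endpoints lie on the axis, so all edge lengths are preserved
            out[i+1:j] = verts[i] + (verts[i+1:j] - verts[i]) @ Rm.T
            return out

        n, R = 30, 5.0                         # R must exceed the initial circumradius
        poly = regular_polygon(n)
        assert np.linalg.norm(poly, axis=1).max() < R
        for _ in range(5000):
            cand = crankshaft(poly, rng)
            if np.linalg.norm(cand, axis=1).max() <= R:   # confinement check
                poly = cand

        edges = np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1)
        print("edge lengths stay unit:", np.allclose(edges, 1.0))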

  18. Hybrid computer technique yields random signal probability distributions

    NASA Technical Reports Server (NTRS)

    Cameron, W. D.

    1965-01-01

    Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.

  19. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.

  20. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models of hillslope production and fluvial transport processes, which is particularly useful for identifying sediment provenance in poorly monitored river basins.

  1. Impacts of Non-Divergence-Free Flows on the Coalescence of Initially Distant Buoyant Scalars on a Turbulent Free Surface

    NASA Astrophysics Data System (ADS)

    Pratt, K.; Crimaldi, J. P.

    2016-02-01

    Lagrangian Coherent Structures (LCS) have been shown to play a predictive role in the coalescence of initially distant scalars in incompressible flows. Buoyant scalars on the free surface of a 3D incompressible turbulent fluid, however, are advected by a 2D compressible velocity field, resulting in scalar distributions that differ from those seen in a 2D incompressible flow. Our research uses both numerical and experimental approaches to investigate the coalescence of two initially distant reactive scalars to infer the impact of non-divergence-free behavior on buoyant scalar coalescence. Preliminary numerical results, utilizing incompressible and compressible chaotic 2D models, indicate that non-divergence-free behavior increases the likelihood of scalar coalescence and therefore enhances any interactions or reactions between the scalars. In addition, the shape and distribution of LCS is altered in compressible flows, which may explain the increased likelihood of scalar coalescence. Experimentally, we have constructed a 60 × 60 × 60 cm tank that generates three-dimensional turbulence via random pulsing of 36 jets on the tank bottom. Buoyant fluorescent red and green particles are used to quantify coalescence. Through the addition of a thin surfactant film on the free surface, results for incompressible flow cases are also obtained and directly compared to the compressible results. From these results, we hope to elucidate the role of free-surface flow on the coalescence of initially distant buoyant scalars, and extend these results to oceanic mixing problems, such as the transport of phytoplankton blooms and oil spills.

  2. Diffusion-driven self-assembly of rodlike particles: Monte Carlo simulation on a square lattice

    NASA Astrophysics Data System (ADS)

    Lebovka, Nikolai I.; Tarasevich, Yuri Yu.; Gigiberiya, Volodymyr A.; Vygornitskii, Nikolai V.

    2017-05-01

    The diffusion-driven self-assembly of rodlike particles was studied by means of Monte Carlo simulation. The rods were represented as linear k-mers (i.e., particles occupying k adjacent sites). In the initial state, they were deposited onto a two-dimensional square lattice of size L × L up to the jamming concentration using a random sequential adsorption algorithm. The size of the lattice, L, was varied from 128 to 2048, and periodic boundary conditions were applied along both x and y axes, while the length of the k-mers (determining the aspect ratio) was varied from 2 to 12. The k-mers oriented along the x and y directions (kx-mers and ky-mers, respectively) were deposited equiprobably. In the course of the simulation, the numbers of intraspecific and interspecific contacts between k-mers of the same sort and of different sorts, respectively, were calculated. Both the shift ratio of the actual number of shifts along the longitudinal or transverse axes of the k-mers and the electrical conductivity of the system were also examined. For the initial random configuration, quite different self-organization behavior was observed for short and long k-mers. For long k-mers (k ≥ 6), three main stages of diffusion-driven spatial segregation (self-assembly) were identified: the initial stage, reflecting destruction of the jamming state; the intermediate stage, reflecting continuous cluster coarsening and labyrinth pattern formation; and the final stage, reflecting the formation of diagonal stripe domains. Additional examination of two artificially constructed initial configurations showed that this pattern of diagonal stripe domains is an attractor, i.e., any spatial distribution of k-mers tends to transform into diagonal stripes. Nevertheless, the time for relaxation to the steady state increases substantially as the lattice size grows.
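
    A minimal sketch of the deposition stage only: random sequential adsorption of horizontal and vertical k-mers on a periodic square lattice. Stopping after a fixed run of failed attempts is a practical stand-in for detecting jamming, and the parameters are illustrative.

        # Sketch: RSA of kx- and ky-mers on an L x L periodic square lattice.
        import numpy as np

        rng = np.random.default_rng(0)
        L, k = 64, 6
        lattice = np.zeros((L, L), dtype=np.int8)   # 0 empty, 1 kx-mer, 2 ky-mer

        fails, placed = 0, 0
        while fails < 50_000:
            x, y = rng.integers(L, size=2)
            horizontal = rng.random() < 0.5         # both orientations equiprobable
            if horizontal:
                cols = (np.arange(k) + y) % L       # periodic boundaries
                sites = lattice[x, cols]
            else:
                rows = (np.arange(k) + x) % L
                sites = lattice[rows, y]
            if sites.any():
                fails += 1
                continue
            if horizontal:
                lattice[x, cols] = 1
            else:
                lattice[rows, y] = 2
            placed += 1
            fails = 0                               # reset the failure counter

        print("k-mers placed:", placed, "coverage:", placed * k / L**2)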

  3. Random walks with random velocities.

    PubMed

    Zaburdaev, Vasily; Schmiedeberg, Michael; Stark, Holger

    2008-07-01

    We consider a random walk model that takes into account the velocity distribution of random walkers. Random motion with alternating velocities is inherent to various physical and biological systems. Moreover, the velocity distribution is often the first characteristic that is experimentally accessible. Here, we derive transport equations describing the dispersal process in the model and solve them analytically. The asymptotic properties of solutions are presented in the form of a phase diagram that shows all possible scaling regimes, including superdiffusive, ballistic, and superballistic motion. The theoretical results of this work are in excellent agreement with accompanying numerical simulations.

  4. Assortativity and leadership emerge from anti-preferential attachment in heterogeneous networks.

    PubMed

    Sendiña-Nadal, I; Danziger, M M; Wang, Z; Havlin, S; Boccaletti, S

    2016-02-18

    Real-world networks have distinct topologies, with marked deviations from purely random networks. Many of them exhibit degree-assortativity, with nodes of similar degree more likely to link to one another. Though microscopic mechanisms have been suggested for the emergence of other topological features, assortativity has proven elusive. Assortativity can be artificially implanted in a network via degree-preserving link permutations, however this destroys the graph's hierarchical clustering and does not correspond to any microscopic mechanism. Here, we propose the first generative model which creates heterogeneous networks with scale-free-like properties in degree and clustering distributions and tunable realistic assortativity. Two distinct populations of nodes are incrementally added to an initial network by selecting a subgraph to connect to at random. One population (the followers) follows preferential attachment, while the other population (the potential leaders) connects via anti-preferential attachment: they link to lower degree nodes when added to the network. By selecting the lower degree nodes, the potential leader nodes maintain high visibility during the growth process, eventually growing into hubs. The evolution of links in Facebook empirically validates the connection between the initial anti-preferential attachment and long term high degree. In this way, our work sheds new light on the structure and evolution of social networks.
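
    A minimal sketch of the stated growth mechanism, with illustrative parameters and with "anti-preferential" read as attachment probability proportional to 1/degree (one plausible interpretation; the paper's exact kernel is not given here):

        # Sketch: growth with preferential followers and anti-preferential leaders.
        import numpy as np

        rng = np.random.default_rng(5)
        m, n_new, leader_frac = 3, 2000, 0.05

        # Initial network: a small complete graph, every degree equals m.
        deg = {i: m for i in range(m + 1)}
        edges = [(i, j) for i in range(m + 1) for j in range(i)]

        for new in range(m + 1, m + 1 + n_new):
            nodes = np.array(list(deg))
            d = np.array([deg[v] for v in nodes], dtype=float)
            w = 1.0 / d if rng.random() < leader_frac else d   # anti-pref vs pref
            targets = rng.choice(nodes, size=m, replace=False, p=w / w.sum())
            for t in targets:
                edges.append((new, int(t)))
                deg[int(t)] += 1
            deg[new] = m

        degrees = np.array(list(deg.values()))
        print("max degree:", degrees.max(), "mean degree:", degrees.mean())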

  5. Assortativity and leadership emerge from anti-preferential attachment in heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Sendiña-Nadal, I.; Danziger, M. M.; Wang, Z.; Havlin, S.; Boccaletti, S.

    2016-02-01

    Real-world networks have distinct topologies, with marked deviations from purely random networks. Many of them exhibit degree-assortativity, with nodes of similar degree more likely to link to one another. Though microscopic mechanisms have been suggested for the emergence of other topological features, assortativity has proven elusive. Assortativity can be artificially implanted in a network via degree-preserving link permutations, however this destroys the graph’s hierarchical clustering and does not correspond to any microscopic mechanism. Here, we propose the first generative model which creates heterogeneous networks with scale-free-like properties in degree and clustering distributions and tunable realistic assortativity. Two distinct populations of nodes are incrementally added to an initial network by selecting a subgraph to connect to at random. One population (the followers) follows preferential attachment, while the other population (the potential leaders) connects via anti-preferential attachment: they link to lower degree nodes when added to the network. By selecting the lower degree nodes, the potential leader nodes maintain high visibility during the growth process, eventually growing into hubs. The evolution of links in Facebook empirically validates the connection between the initial anti-preferential attachment and long term high degree. In this way, our work sheds new light on the structure and evolution of social networks.

  6. Modeling species-abundance relationships in multi-species collections

    USGS Publications Warehouse

    Peng, S.; Yin, Z.; Ren, H.; Guo, Q.

    2003-01-01

    Species-abundance relationship is one of the most fundamental aspects of community ecology. Since Motomura first developed the geometric series model to describe the feature of community structure, ecologists have developed many other models to fit the species-abundance data in communities. These models can be classified into empirical and theoretical ones, including (1) statistical models, i.e., negative binomial distribution (and its extension), log-series distribution (and its extension), geometric distribution, lognormal distribution, Poisson-lognormal distribution, (2) niche models, i.e., geometric series, broken stick, overlapping niche, particulate niche, random assortment, dominance pre-emption, dominance decay, random fraction, weighted random fraction, composite niche, Zipf or Zipf-Mandelbrot model, and (3) dynamic models describing community dynamics and restrictive function of environment on community. These models have different characteristics and fit species-abundance data in various communities or collections. Among them, log-series distribution, lognormal distribution, geometric series, and broken stick model have been most widely used.

  7. Does prism width from the shell prismatic layer have a random distribution?

    NASA Astrophysics Data System (ADS)

    Vancolen, Séverine; Verrecchia, Eric

    2008-10-01

    A study of the distribution of the prism width inside the prismatic layer of Unio tumidus (Philipsson 1788, Diss Hist-Nat, Berling, Lundæ) from Lake Neuchâtel, Switzerland, has been conducted in order to determine whether or not this distribution is random. Measurements of 954 to 1,343 prism widths (depending on shell sample) have been made using a scanning electron microscope in backscattered electron mode. A white noise test has been applied to the distribution of prism sizes (i.e. width). It shows that there is no temporal cycle that could potentially influence their formation and growth. These results suggest that prism widths are randomly distributed, and related neither to external rings nor to environmental constraints.

  8. Super-resolving random-Gaussian apodized photon sieve.

    PubMed

    Sabatyan, Arash; Roshaninejad, Parisa

    2012-09-10

    A novel apodized photon sieve is presented in which a random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The randomness in the dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes in order to form a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities in the focusing properties of the photon sieve is examined as well. Analysis shows that secondary maxima have evidently been suppressed, transmission has increased enormously, and the width of the central maximum is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.

  9. Beta blocker infusion decreases the magnitude of core hypothermia after anesthesia induction.

    PubMed

    Inoue, S; Abe, R; Kawaguchi, M; Kobayashi, H; Furuya, H

    2010-12-01

    Beta-1-receptor blockade reduces heart rate, cardiac output, and arterial pressure while increasing peripheral vascular resistance. It is possible that beta blockers not only inhibit the core-to-peripheral redistribution of body heat and the cutaneous heat loss due to vasodilation after anesthesia induction but also reduce the convective transfer of heat from the core to peripheral tissues by decreasing cardiac output. The authors investigated whether the co-administration of esmolol or landiolol, ultra-short-acting beta blockers, attenuates the magnitude of initial redistribution hypothermia after anesthesia induction and tracheal intubation. Immediately prior to the induction of anesthesia, patients were randomly assigned to receive 0.2 mg kg⁻¹ of landiolol (landiolol group; N=30), 1 mg kg⁻¹ of esmolol (esmolol group; N=30), or 0.1 mL kg⁻¹ of saline (control group; N=30). Heart rate (HR), blood pressure, cardiac output, and tympanic, forearm, and digit temperatures were recorded. Forearm minus fingertip skin-surface temperature gradients (temperature gradient) were calculated. Tympanic membrane temperatures 15 to 60 min after the induction of anesthesia were significantly higher in the esmolol group than in the control group, although the temperature gradient was similar among the three groups. Both esmolol and landiolol inhibited the increase in HR and mean arterial pressure (MAP) after the induction of anesthesia and tracheal intubation. The cardiac index in the esmolol group was significantly lower than in the control group. The degree of hemodynamic attenuation after induction by esmolol was larger than that of landiolol. The co-administration of esmolol, but not landiolol, attenuated the magnitude of initial redistribution hypothermia after anesthesia induction and tracheal intubation. Esmolol likely prevented initial hypothermia because it attenuated the convective transfer of heat from the core to peripheral tissues by decreasing cardiac output.

  10. Effects of Alteplase for Acute Stroke on the Distribution of Functional Outcomes: A Pooled Analysis of 9 Trials.

    PubMed

    Lees, Kennedy R; Emberson, Jonathan; Blackwell, Lisa; Bluhmki, Erich; Davis, Stephen M; Donnan, Geoffrey A; Grotta, James C; Kaste, Markku; von Kummer, Rüdiger; Lansberg, Maarten G; Lindley, Richard I; Lyden, Patrick; Murray, Gordon D; Sandercock, Peter A G; Toni, Danilo; Toyoda, Kazunori; Wardlaw, Joanna M; Whiteley, William N; Baigent, Colin; Hacke, Werner; Howard, George

    2016-09-01

    Thrombolytic therapy with intravenous alteplase within 4.5 hours of ischemic stroke onset increases the overall likelihood of an excellent outcome (no, or nondisabling, symptoms). Any improvement in functional outcome distribution has value, and herein we provide an assessment of the effect of alteplase on the distribution of the functional level by treatment delay, age, and stroke severity. Prespecified pooled analysis of 6756 patients from 9 randomized trials comparing alteplase versus placebo/open control. Ordinal logistic regression models assessed treatment differences after adjustment for treatment delay, age, stroke severity, and relevant interaction term(s). Treatment with alteplase was beneficial for a delay in treatment extending to 4.5 hours after stroke onset, with a greater benefit with earlier treatment. Neither age nor stroke severity significantly influenced the slope of the relationship between benefit and time to treatment initiation. For the observed case mix of patients treated within 4.5 hours of stroke onset (mean 3 hours and 20 minutes), the net absolute benefit from alteplase (ie, the difference between those who would do better if given alteplase and those who would do worse) was 55 patients per 1000 treated (95% confidence interval, 13-91; P=0.004). Treatment with intravenous alteplase initiated within 4.5 hours of stroke onset increases the chance of achieving an improved level of function for all patients across the age spectrum, including the over 80s and across all severities of stroke studied (top versus bottom fifth means: 22 versus 4); the earlier that treatment is initiated, the greater the benefit. © 2016 American Heart Association, Inc.

  11. Spatial Distribution of Phase Singularities in Optical Random Vector Waves.

    PubMed

    De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L

    2016-08-26

    Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.

  12. 3D numerical simulations of multiphase continental rifting

    NASA Astrophysics Data System (ADS)

    Naliboff, J.; Glerum, A.; Brune, S.

    2017-12-01

    Observations of rifted margin architecture suggest continental breakup occurs through multiple phases of extension with distinct styles of deformation. The initial rifting stages are often characterized by slow extension rates and distributed normal faulting in the upper crust decoupled from deformation in the lower crust and mantle lithosphere. Further rifting marks a transition to higher extension rates and coupling between the crust and mantle lithosphere, with deformation typically focused along large-scale detachment faults. Significantly, recent detailed reconstructions and high-resolution 2D numerical simulations suggest that rather than remaining focused on a single long-lived detachment fault, deformation in this phase may progress toward lithospheric breakup through a complex process of fault interaction and development. The numerical simulations also suggest that an initial phase of distributed normal faulting can play a key role in the development of these complex fault networks and the resulting finite deformation patterns. Motivated by these findings, we will present 3D numerical simulations of continental rifting that examine the role of temporal increases in extension velocity on rifted margin structure. The numerical simulations are developed with the massively parallel finite-element code ASPECT. While originally designed to model mantle convection using advanced solvers and adaptive mesh refinement techniques, ASPECT has been extended to model visco-plastic deformation that combines a Drucker-Prager yield criterion with non-linear dislocation and diffusion creep. To promote deformation localization, the internal friction angle and cohesion weaken as a function of accumulated plastic strain. Rather than prescribing a single zone of weakness to initiate deformation, an initial random perturbation of the plastic strain field combined with rapid strain weakening produces distributed normal faulting at relatively slow rates of extension in both 2D and 3D simulations. Our presentation will focus on both the numerical assumptions required to produce these results and variations in 3D rifted margin architecture arising from a transition from slow to rapid rates of extension.

  13. Random distributed feedback fiber laser at 2.1 μm.

    PubMed

    Jin, Xiaoxi; Lou, Zhaokai; Zhang, Hanwei; Xu, Jiangming; Zhou, Pu; Liu, Zejin

    2016-11-01

    We demonstrate a random distributed feedback fiber laser at 2.1 μm. A high-power pulsed Tm-doped fiber laser operating at 1.94 μm with a temporal duty ratio of 30% was employed as a pump laser to increase the equivalent incident pump power. A piece of 150 m highly GeO2-doped silica fiber that provides strong Raman gain and random distributed feedback was used as the gain medium. The maximum output power reached 0.5 W with an optical efficiency of 9%, which could be further improved with more pump power and an optimized fiber length. To the best of our knowledge, this is the first demonstration of a random distributed feedback fiber laser in the 2 μm band based on Raman gain.

  14. Manipulation of particles by weak forces

    NASA Technical Reports Server (NTRS)

    Adler, M. S.; Savkar, S. D.; Summerhayes, H. R.

    1972-01-01

    Quantitative relations between various force fields and their effects on the motion of particles of various sizes and physical characteristics were studied. The forces considered were those derived from light, heat, microwaves, electric interactions, magnetic interactions, particulate interactions, and sound. A physical understanding is given of the forces considered as well as formulae which express how the size of the force depends on the physical and electrical properties of the particle. The drift velocity in a viscous fluid is evaluated as a function of initial acceleration and the effects of thermal random motion are considered. A means of selectively sorting or moving particles by choosing a force system and/or environment such that the particle of interest reacts uniquely was developed. The forces considered and a demonstration of how the initial acceleration, drift velocity, and ultimate particle density distribution is affected by particle, input, and environmental parameters are tabulated.

  15. Monte Carlo based NMR simulations of open fractures in porous media

    NASA Astrophysics Data System (ADS)

    Lukács, Tamás; Balázs, László

    2014-05-01

    According to the basic principles of nuclear magnetic resonance (NMR), a measurement's free induction decay curve has an exponential characteristic, and its parameter is the transversal relaxation time, T2, given by the Bloch equations in the rotating frame. In our simulations we consider the particular case in which the bulk volume is negligible relative to the whole system and vertical movement is essentially zero, so the diffusion term of the T2 relation can be omitted. Such small-aperture situations are common in sedimentary layers, and the smallness of the observed volume allows us to work with just the bulk relaxation and the surface relaxation. The simulation uses the Monte Carlo method: it is based on a random-walk generator which produces the Brownian motion of the particles from uniformly distributed pseudorandom numbers. An attached differential equation accounts for the bulk relaxation, and the initial and iterated conditions guarantee the simulation's replicability and enable consistent estimates. We generate an initial geometry of a plane segment with known height and a given number of particles; the spatial distribution is set equal in each simulation, and the surface-to-volume ratio remains constant. It follows that, for a given thickness of the open fracture, the surface relaxivity can be determined from the fitted curve's parameter. The calculated T2 distribution curves also indicate the variability in the observed fracture situations. Varying the height of the lamina at a constant diffusion coefficient also produces a characteristic anomaly, and for comparison we have run the simulation with the same initial volume, number of particles and conditions in spherical bulks, whose profiles are clear and easy to understand. The surface relaxation enables us to estimate the interaction between the materials of the boundary with these two geometrically well-defined bulks; the distribution therefore serves as a basis for estimating porosity and can be used to identify fine-grained porous media.
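
    A minimal sketch of the Monte Carlo ingredient described above, with illustrative (not the paper's) parameters: walkers diffuse between two plates an aperture h apart, are killed with a small probability on wall contact (surface relaxation), and all survivors decay at the bulk rate 1/T2_bulk.

        # Sketch: random walkers in an open fracture with surface relaxation.
        import numpy as np

        rng = np.random.default_rng(9)
        n_walkers, n_steps = 20_000, 2000
        h, step = 10e-6, 0.05e-6               # aperture and step size (m)
        dt = 1e-4                              # time per step (s)
        T2_bulk, p_kill = 2.0, 0.01

        z = rng.uniform(0, h, n_walkers)       # uniform initial distribution
        alive = np.ones(n_walkers, dtype=bool)
        magnetization = []
        for it in range(n_steps):
            z[alive] += rng.choice([-step, step], size=alive.sum())
            hit = (z < 0) | (z > h)
            killed = hit & alive & (rng.random(n_walkers) < p_kill)
            alive &= ~killed
            z = np.clip(z, 0, h)               # put survivors back at the wall
            # bulk relaxation applies to every surviving walker
            magnetization.append(alive.sum() * np.exp(-(it + 1) * dt / T2_bulk))

        M = np.array(magnetization) / n_walkers
        t = dt * np.arange(1, n_steps + 1)
        T2_est = -1.0 / np.polyfit(t, np.log(M), 1)[0]   # crude log-linear fit
        print("apparent T2 (s):", T2_est)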

  16. Numbering questionnaires had no impact on the response rate and only a slight influence on the response content of a patient safety culture survey: a randomized trial.

    PubMed

    Kundig, François; Staines, Anthony; Kinge, Thompson; Perneger, Thomas V

    2011-11-01

    In self-completed surveys, anonymous questionnaires are sometimes numbered so as to avoid sending reminders to initial nonrespondents. This number may be perceived as a threat to confidentiality by some respondents, which may reduce the response rate or cause social desirability bias. In this study, we evaluated whether using nonnumbered vs. numbered questionnaires influenced the response rate and the response content. During a patient safety culture survey, we randomized participants into two groups: one received an anonymous nonnumbered questionnaire and the other a numbered questionnaire. We compared the survey response rates and the distributions of the responses for the 42 questionnaire items across the two groups. Response rates were similar in the two groups (nonnumbered, 75.2%; numbered, 72.8%; difference, 2.4%; P=0.28). Five of the 42 questions had statistically significant differences in distributions, but these differences were small. Unexpectedly, in all five instances, the patient safety culture ratings were more favorable in the nonnumbered group. Numbering of mailed questionnaires had no impact on the response rate. Numbering significantly influenced the response content of several items, but these differences were small and ran against the hypothesis of social desirability bias. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    NASA Astrophysics Data System (ADS)

    Gatto, Riccardo

    2017-12-01

    This article considers the random walk over ℝ^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
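
    The saddlepoint formulas themselves are in the paper; a minimal sketch of the Monte Carlo side of the comparison, for p = 3 with a fixed number of steps and exponential step lengths, is:

        # Sketch: distance from the origin of a 3D random flight.
        import numpy as np

        rng = np.random.default_rng(11)
        n_steps, n_walks = 10, 100_000

        # Uniform directions on the unit sphere via normalized Gaussian vectors.
        u = rng.normal(size=(n_walks, n_steps, 3))
        u /= np.linalg.norm(u, axis=2, keepdims=True)
        lengths = rng.exponential(1.0, size=(n_walks, n_steps, 1))

        endpoints = (u * lengths).sum(axis=1)
        distance = np.linalg.norm(endpoints, axis=1)
        print("mean distance:", distance.mean())
        print("95th percentile:", np.quantile(distance, 0.95))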

  18. The fracture load and failure types of veneered anterior zirconia crowns: an analysis of normal and Weibull distribution of complete and censored data.

    PubMed

    Stawarczyk, Bogna; Ozcan, Mutlu; Hämmerle, Christoph H F; Roos, Malgorzata

    2012-05-01

    The aim of this study was to compare the fracture load of veneered anterior zirconia crowns using normal and Weibull distributions of complete and censored data. Standardized zirconia frameworks for maxillary canines were milled using a CAD/CAM system and randomly divided into 3 groups (N=90, n=30 per group). They were veneered with three veneering ceramics, namely GC Initial ZR, Vita VM9 and IPS e.max Ceram, using the layering technique. The crowns were cemented with glass ionomer cement on metal abutments. The specimens were then loaded to fracture (1 mm/min) in a universal testing machine. The data were analyzed using the classical method (normal data distribution (μ, σ); Levene test and one-way ANOVA) and according to Weibull statistics (s, m). In addition, fracture load results were analyzed depending on complete and censored failure types (only chipping vs. total fracture together with chipping). When computed with complete data, significantly higher mean fracture loads (N) were observed for GC Initial ZR (μ=978, σ=157; s=1043, m=7.2) and VITA VM9 (μ=1074, σ=179; s=1139, m=7.8) than for IPS e.max Ceram (μ=798, σ=174; s=859, m=5.8) (p<0.05) by classical and Weibull statistics, respectively. When the data were censored for only total fracture, IPS e.max Ceram presented the lowest fracture load for chipping with both the classical distribution (μ=790, σ=160) and Weibull statistics (s=836, m=6.5). When total fracture with chipping (classical distribution) was considered as failure, IPS e.max Ceram did not show a significantly different fracture load for total fracture (μ=1054, σ=110) compared to the other groups (GC Initial ZR: μ=1039, σ=152; VITA VM9: μ=1170, σ=166). According to the Weibull-distributed data, VITA VM9 showed a significantly higher fracture load (s=1228, m=9.4) than the other groups. Both the classical distribution and Weibull statistics for complete data yielded similar outcomes. Censored data analysis of all-ceramic systems based on failure types is essential and brings additional information regarding the susceptibility to chipping or total fracture. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
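
    A minimal sketch of the two descriptions used above, on synthetic data: the classical (μ, σ) summary alongside a two-parameter Weibull fit (shape m, scale s) by maximum likelihood; scipy's weibull_min with floc=0 gives the standard two-parameter form, and the generating parameters below are illustrative.

        # Sketch: normal vs. Weibull description of fracture loads.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        loads = stats.weibull_min.rvs(7.2, scale=1043, size=30, random_state=rng)

        mu, sigma = loads.mean(), loads.std(ddof=1)      # classical description
        m, _, s = stats.weibull_min.fit(loads, floc=0)   # Weibull shape m, scale s
        print(f"normal: mu={mu:.0f} N, sigma={sigma:.0f} N")
        print(f"Weibull: m={m:.1f}, s={s:.0f} N")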

  19. Statistical effects related to low numbers of reacting molecules analyzed for a reversible association reaction A + B = C in ideally dispersed systems: An apparent violation of the law of mass action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.

    2016-03-28

    Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small-volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same equal number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large-volume system). The average rate of the association, being initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher is the rate. This results in a correspondingly higher apparent equilibrium constant. A quite opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large-volume systems, being the lower, the lower is the average number of reacting molecules in a droplet. The random distribution of reactant molecules corresponds to ideal (equal sizes of droplets) dispersing of a reaction mixture. Our simulations have shown that when the equilibrated large-volume system is dispersed, the resulting droplet system is already at equilibrium and no changes in the proportions of droplets differing in reactant composition can be observed upon prolongation of the reaction time.
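
    A minimal sketch of the droplet-level stochastic kinetics: a Gillespie simulation of A + B = C with small molecule numbers in a single droplet, under uniform loading. Rate constants and initial counts are illustrative, not the paper's.

        # Sketch: Gillespie simulation of reversible association in one droplet.
        import numpy as np

        rng = np.random.default_rng(4)

        def gillespie_droplet(nA, nB, k_f, k_r, t_end, rng):
            """Return the time-averaged number of C molecules in one droplet."""
            t, nC = 0.0, 0
            t_weighted, last = 0.0, 0.0
            while t < t_end:
                a_f = k_f * nA * nB            # association propensity
                a_r = k_r * nC                 # dissociation propensity
                a0 = a_f + a_r
                if a0 == 0:
                    break
                t += rng.exponential(1.0 / a0)
                t_weighted += nC * (t - last)  # nC held since the last event
                last = t
                if rng.random() < a_f / a0:
                    nA, nB, nC = nA - 1, nB - 1, nC + 1
                else:
                    nA, nB, nC = nA + 1, nB + 1, nC - 1
            return t_weighted / t

        # Uniform loading: every droplet starts with the same counts.
        avg_C = np.mean([gillespie_droplet(5, 5, 1.0, 2.0, 200.0, rng)
                         for _ in range(500)])
        print("time-averaged C per droplet (uniform loading):", avg_C)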

  20. Bivariate-t distribution for transition matrix elements in Breit-Wigner to Gaussian domains of interacting particle systems.

    PubMed

    Kota, V K B; Chavda, N D; Sahu, R

    2006-04-01

    Interacting many-particle systems with a mean-field one-body part plus a chaos-generating random two-body interaction of strength λ exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions, with transition points marked by λ = λ_c and λ = λ_F, respectively; λ_F > λ_c. For these systems a theory for the matrix elements of one-body transition operators is available, valid in the Gaussian domain with λ > λ_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate-t distribution, the theory extends from the Gaussian regime down into the BW regime, as far as λ = λ_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.

  1. Gossip-Based Dissemination

    NASA Astrophysics Data System (ADS)

    Friedman, Roy; Kermarrec, Anne-Marie; Miranda, Hugo; Rodrigues, Luís

    Gossip-based networking has emerged as a viable approach to disseminate information reliably and efficiently in large-scale systems. Initially introduced for database replication [222], the applicability of the approach extends much further now. For example, it has been applied for data aggregation [415], peer sampling [416] and publish/subscribe systems [845]. Gossip-based protocols rely on a periodic peer-wise exchange of information in wired systems. By changing the way each peer is selected for the gossip communication, and which data are exchanged and processed [451], gossip systems can be used to perform different distributed tasks, such as, among others: overlay maintenance, distributed computation, and information dissemination (a collection of papers on gossip can be found in [451]). In a wired setting, the peer sampling service, allowing for a random or specific peer selection, is often provided as an independent service, able to operate independently from other gossip-based services [416].
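
    A minimal sketch of push-style gossip with uniform random peer sampling (illustrative only; real protocols typically gossip to several peers per round and exchange state bidirectionally): every informed node forwards the update to one random peer per round, and the rounds to full coverage grow roughly logarithmically in the system size.

        # Sketch: push gossip dissemination with uniform peer sampling.
        import random

        def push_gossip(n_nodes, seed=0):
            rng = random.Random(seed)
            informed = {0}                     # node 0 holds the update
            rounds = 0
            while len(informed) < n_nodes:
                rounds += 1
                for node in list(informed):    # every informed node gossips
                    peer = rng.randrange(n_nodes)   # uniform peer sampling
                    informed.add(peer)
            return rounds

        for n in (100, 1000, 10000):
            print(n, "nodes ->", push_gossip(n), "rounds")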

  2. An information hidden model holding cover distributions

    NASA Astrophysics Data System (ADS)

    Fu, Min; Cai, Chao; Dai, Zuxu

    2018-03-01

    The goal of steganography is to embed secret data into a cover so that no one apart from the sender and intended recipients can find the secret data. Usually, the way the cover is changed is decided by a hiding function, and no existing model could be used to find an optimal function that greatly reduces the distortion suffered by the cover. This paper treats the cover carrying the secret message as a random Markov chain, takes advantage of the deterministic relation between the initial distributions and the transfer matrix of the Markov chain, and uses the transfer matrix as a constraint to decrease the statistical distortion suffered by the cover in the process of information hiding. Furthermore, a hiding function is designed, and the transfer matrix is presented as a matrix from the original cover to the stego cover. Experimental results show that the new model preserves consistent statistical characterizations of the original and stego covers.

  3. The Excursion set approach: Stratonovich approximation and Cholesky decomposition

    NASA Astrophysics Data System (ADS)

    Nikakhtar, Farnik; Ayromlou, Mohammadreza; Baghram, Shant; Rahvar, Sohrab; Tabar, M. Reza Rahimi; Sheth, Ravi K.

    2018-05-01

    The excursion set approach is a framework for estimating how the number density of nonlinear structures in the cosmic web depends on the expansion history of the universe and the nature of gravity. A key part of the approach is the estimation of the first crossing distribution of a suitably chosen barrier by random walks having correlated steps: The shape of the barrier is determined by the physics of nonlinear collapse, and the correlations between steps by the nature of the initial density fluctuation field. We describe analytic and numerical methods for calculating such first up-crossing distributions. While the exact solution can be written formally as an infinite series, we show how to approximate it efficiently using the Stratonovich approximation. We demonstrate its accuracy using Monte-Carlo realizations of the walks, which we generate using a novel Cholesky-decomposition based algorithm, which is significantly faster than the algorithm that is currently in the literature.
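
    A minimal sketch of the Cholesky-based step: drawing Gaussian walks whose steps have a prescribed covariance C = L Lᵀ, with steps = L z for i.i.d. standard normal z. The exponential step-step covariance below is illustrative, not the power-spectrum-derived covariance of the paper.

        # Sketch: correlated-step random walks via Cholesky decomposition.
        import numpy as np

        rng = np.random.default_rng(8)
        n_steps, corr_len = 200, 10.0

        i = np.arange(n_steps)
        C = np.exp(-np.abs(i[:, None] - i[None, :]) / corr_len)   # covariance
        L = np.linalg.cholesky(C)

        z = rng.normal(size=n_steps)
        steps = L @ z                          # correlated Gaussian steps
        walk = np.cumsum(steps)

        # sanity check: adjacent-step correlation vs. the target exp(-1/corr_len)
        many = L @ rng.normal(size=(n_steps, 5000))
        print("adjacent-step corr:", np.corrcoef(many[0], many[1])[0, 1],
              "target:", np.exp(-1 / corr_len))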

  4. Diffusion with stochastic resetting at power-law times.

    PubMed

    Nagar, Apoorva; Gupta, Shamik

    2016-06-01

    What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals τ distributed as a power law ~ τ^{-(1+α)}, α > 0? Modeling the stochastic process by diffusion and the large changes as abrupt resets to the initial condition, we obtain exact closed-form expressions for both static and dynamic quantities, while accounting for strong correlations implied by a power law. Our results show that the resulting dynamics exhibits a spectrum of rich long-time behavior, from an ever-spreading spatial distribution for α < 1, to one that is time independent for α > 1. The dynamics has strong consequences on the time to reach a distant target for the first time; we specifically show that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
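
    A minimal simulation sketch of the setup, with illustrative parameters: 1D diffusion reset to the origin at waiting times drawn from a Pareto law with lower cutoff 1 (one simple realization of the power-law intervals; the paper works with exact closed forms, not simulation).

        # Sketch: diffusion with resets at power-law distributed intervals.
        import numpy as np

        rng = np.random.default_rng(6)

        def position_at(T, alpha, D=0.5, rng=rng):
            """Position at time T for diffusion reset to 0 at Pareto(alpha) intervals."""
            t = 0.0
            while True:
                tau = (1.0 - rng.random()) ** (-1.0 / alpha)   # Pareto, tau >= 1
                if t + tau >= T:
                    # free diffusion since the last reset, over the remaining time
                    return rng.normal(0.0, np.sqrt(2 * D * (T - t)))
                t += tau                        # a reset occurs at time t

        for alpha in (0.5, 1.5):
            xs = np.array([position_at(1000.0, alpha) for _ in range(5000)])
            print(f"alpha={alpha}: std of position at T=1000 is {xs.std():.1f}")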

  5. A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2010-08-01

    A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained altogether for any n for d = 1, 2, 4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q > 0. Given the total walk length being equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q = 1. Simple analytical expressions are obtained for any d ≥ 2 and n ≥ 2 for the endpoint distributions of two families of walks whose q are integers or half-integers which depend solely on d. These endpoint distributions have a simple geometrical interpretation. Expressed for a two-step planar walk whose q = 1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
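
    A minimal sketch of the two-step planar case discussed above (q = 1): uniform step orientations with Dirichlet(q, q) step lengths summing to 1, checked empirically against the stated geometrical interpretation, the disc projection of a point uniform on the 3D unit sphere.

        # Sketch: two-step planar Pearson-Dirichlet walk with q = 1.
        import numpy as np

        rng = np.random.default_rng(10)
        n_walks, q = 100_000, 1.0

        lengths = rng.dirichlet([q, q], size=n_walks)          # two steps, sum to 1
        angles = rng.uniform(0, 2 * np.pi, size=(n_walks, 2))  # uniform orientations
        steps = lengths[..., None] * np.stack([np.cos(angles),
                                               np.sin(angles)], axis=-1)
        r = np.linalg.norm(steps.sum(axis=1), axis=1)          # endpoint radius

        # compare with the projection of a uniform point on the 3D unit sphere
        g = rng.normal(size=(n_walks, 3))
        g /= np.linalg.norm(g, axis=1, keepdims=True)
        r_proj = np.linalg.norm(g[:, :2], axis=1)
        print("walk radius quantiles :", np.quantile(r, [0.25, 0.5, 0.75]))
        print("projection quantiles  :", np.quantile(r_proj, [0.25, 0.5, 0.75]))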

  6. Melt structure and self-nucleation of ethylene copolymers

    NASA Astrophysics Data System (ADS)

    Alamo, Rufina G.

    A strong memory effect of crystallization has been observed in melts of random ethylene copolymers well above the equilibrium melting temperature. These studies have been carried out by DSC, x-ray, TEM and optical microscopy on a large number of model, narrow, and broad copolymers with different comonomer types and contents. Melt memory is correlated with self-seeds that increase the crystallization rate of ethylene copolymers. The seeds are associated with molten ethylene sequences from the initial crystals that remain in close proximity and lower the nucleation barrier. Diffusion of all sequences to a randomized melt state is a slow process, restricted by topological chain constraints (loops, knots, and other entanglements) that build in the intercrystalline region during crystallization. Self-seeds dissolve above a critical melt temperature that demarcates homogeneity of the copolymer melt. There is a critical threshold level of crystallinity to observe the effect of melt memory on crystallization rate, thus supporting the correlation between melt memory and the change in melt structure during copolymer crystallization. Unlike binary blends, commercial ethylene-1-alkene copolymers with a range in inter-chain comonomer composition between 1 and about 15 mol % display an inversion of the crystallization rate in a range of melt temperatures where narrow copolymers show a continuous acceleration of the rate. With decreasing the initial melt temperature, broadly distributed copolymers show enhanced crystallization followed by a decrease of crystallization rate. The inversion demarcates the onset of liquid-liquid phase separation (LLPS) and a reduction of self-nuclei due to the strong thermodynamic drive for molecular segregation inside the binodal. The strong effect of melt memory on crystallization rate can be used to identify liquid-liquid phase separation in broadly distributed copolymers, and offers strategies to control the state of copolymer melts in ways of technological relevance for melt processing of LLDPE and other random olefin copolymers. References: B. O. Reid, et al., Macromolecules 46, 6485-6497, 2013 H. Gao, et al., Macromolecules 46, 6498-6506, 2013 A. Mamun et al., Macromolecules 47, 7958-7970, 2014 X. Chen et al., Macromol. Chem. Phys. 216, 1220 -1226, 2015 M. Ren et al., Macromol. Symp. 356, 131-141, 2015 Work supported by the NSF (DMR1105129).

  7. Telephone Peer Counseling of Breastfeeding Among WIC Participants: A Randomized Controlled Trial

    PubMed Central

    Joyce, Ted; Sibley, Kelly; Arnold, Diane; Altindag, Onur

    2014-01-01

    OBJECTIVE: The US Surgeon General has recommended that peer counseling to support breastfeeding become a core service of the Supplemental Nutrition Program for Women, Infants, and Children (WIC). As of 2008, 50% of WIC clients received services from local WIC agencies that offered peer counseling. Little is known about the effectiveness of these peer counseling programs. Randomized controlled trials of peer counseling interventions among low-income women in the United States showed increases in breastfeeding initiation and duration, but it is doubtful that the level of support provided could be scaled up to serve WIC participants nationally. We tested whether a telephone peer counseling program among WIC participants could increase breastfeeding initiation, duration, and exclusivity. METHODS: We randomly assigned 1948 WIC clients recruited during pregnancy who intended to breastfeed or were considering breastfeeding to 3 study arms: no peer counseling, 4 telephone contacts, or 8 telephone contacts. RESULTS: We combined the 2 treatment arms because there was no difference in the distribution of peer contacts. Nonexclusive breastfeeding duration was greater at 3 months postpartum for all women in the treatment group (adjusted relative risk: 1.22; 95% confidence interval [CI]: 1.10–1.34) but greater at 6 months for Spanish-speaking clients only (adjusted relative risk: 1.29; 95% CI: 1.10–1.51). The likelihood of exclusive breastfeeding cessation was lower among Spanish-speaking clients (adjusted odds ratio: 0.78; 95% CI: 0.68–0.89). CONCLUSIONS: A telephone peer counseling program achieved gains in nonexclusive breastfeeding, but modest improvements in exclusive breastfeeding were limited to Spanish-speaking women. PMID:25092936

  8. Scaling Laws for the Multidimensional Burgers Equation with Quadratic External Potential

    NASA Astrophysics Data System (ADS)

    Leonenko, N. N.; Ruiz-Medina, M. D.

    2006-07-01

    The reordering of the multidimensional exponential quadratic operator in coordinate-momentum space (see X. Wang, C.H. Oh and L.C. Kwek (1998), J. Phys. A: Math. Gen. 31:4329-4336) is applied to derive an explicit formulation of the solution to the multidimensional heat equation with quadratic external potential and random initial conditions. The solution to the multidimensional Burgers equation with quadratic external potential under Gaussian strongly dependent scenarios is also obtained via the Hopf-Cole transformation. The limiting distributions of scaling solutions to the multidimensional heat and Burgers equations with quadratic external potential are then obtained under such scenarios.
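
    For reference, the linearization step the abstract relies on is the standard Hopf-Cole substitution (notation here is mine and sign conventions may differ from the paper):

    ```latex
    % Burgers equation with external potential V and viscosity mu:
    %   u_t + (u . grad) u = mu * Laplacian(u) - grad V
    % The Hopf-Cole substitution
    \[
    \mathbf{u} = -2\mu\,\nabla \log h
    \quad\Longrightarrow\quad
    \frac{\partial h}{\partial t} = \mu\,\Delta h + \frac{V(\mathbf{x})}{2\mu}\,h ,
    \]
    % turns the nonlinear problem into a linear heat equation with potential,
    % so random initial data for u become (multiplicative) random initial
    % data for h, which is what makes the scaling analysis tractable.
    ```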

  9. On the time arrows, and randomness in cosmological signals

    NASA Astrophysics Data System (ADS)

    Gurzadyan, V. G.; Sargsyan, S.; Yegorian, G.

    2013-09-01

    Arrows of time - thermodynamical, cosmological, electromagnetic, quantum mechanical, psychological - are basic properties of Nature. For a quantum system-bath closed system the de-correlated initial conditions and no-memory (Markovian) dynamics are outlined as necessary conditions for the appearance of the thermodynamical arrow. The emergence of the arrow for the system evolving according to non-unitary dynamics due to the presence of the bath, then, is a result of limited observability, and we conjecture the arrow in the observable Universe as determined by the dark sector acting as a bath. The voids in the large scale matter distribution induce hyperbolicity of the null geodesics, with possible observational consequences.

  10. Continuous Time Random Walks with memory and financial distributions

    NASA Astrophysics Data System (ADS)

    Montero, Miquel; Masoliver, Jaume

    2017-11-01

    We study financial distributions from the perspective of Continuous Time Random Walks with memory. We review some of our previous developments and apply them to financial problems. We also present some new models with memory that can be useful in characterizing the tendency effects which are inherent in most markets. Finally, we briefly study the effect on return distributions of fractional behaviors in the distribution of pausing times between successive transactions.
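
    A minimal sketch of an uncoupled CTRW of this flavor, assuming Gaussian price jumps and Pareto-tailed pausing times (the paper's memory kernels are not reproduced here; all parameters are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ctrw_path(n_steps, tail_alpha=1.5):
        """Toy CTRW for a log-price: Gaussian jumps separated by
        heavy-tailed (Pareto) pauses between transactions."""
        pauses = rng.pareto(tail_alpha, n_steps) + 1.0   # waiting times
        jumps = rng.normal(0.0, 1.0, n_steps)            # price increments
        return np.cumsum(pauses), np.cumsum(jumps)

    def return_at_horizon(times, log_price, tau):
        """Log-return accumulated over the fixed time horizon tau."""
        k = np.searchsorted(times, tau)   # jumps completed by time tau
        return log_price[k - 1] if k > 0 else 0.0

    times, lp = ctrw_path(10_000)
    print(return_at_horizon(times, lp, 100.0))
    ```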

  11. Emergence of Persistent Infection due to Heterogeneity

    NASA Astrophysics Data System (ADS)

    Agrawal, Vidit; Moitra, Promit; Sinha, Sudeshna

    2017-02-01

    We explore the emergence of persistent infection in a closed region where the disease progression of the individuals is given by the SIRS model, with an individual becoming infected on contact with another infected individual. We investigate the persistence of contagion qualitatively and quantitatively, under increasing heterogeneity in the partitioning of the population into different disease compartments, as well as increasing heterogeneity in the phases of the disease among individuals within a compartment. We observe that when the initial population is uniform, consisting of individuals at the same stage of disease progression, infection arising from a contagious seed does not persist. However, when the initial population consists of randomly distributed refractory and susceptible individuals, a single source of infection can lead to sustained infection in the population, as heterogeneity facilitates the de-synchronization of the phases in the disease cycle of the individuals. We also show how the average size of the window of persistence of infection depends on the degree of heterogeneity in the initial composition of the population. In particular, we show that the infection eventually dies out when the entire initial population is susceptible, while even a few susceptibles among a heterogeneous refractory population give rise to a large persistent infected set.
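
    A toy version of such a lattice SIRS model is easy to write down. The sketch below assumes periodic boundaries, 4-neighbor contact infection, and illustrative phase durations, none of which are taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    L, T_I, T_R = 100, 4, 9      # lattice size; infected/refractory durations
    # Phase convention (one of many possible): 0 = susceptible,
    # 1..T_I = infected, T_I+1..T_I+T_R = refractory, then back to 0.
    phase = rng.integers(0, T_I + T_R + 1, size=(L, L))
    phase[phase <= T_I] = 0               # heterogeneous S/R initial mix
    phase[L // 2, L // 2] = 1             # single infected seed

    def step(phase):
        infected = (phase >= 1) & (phase <= T_I)
        # A susceptible site catches the disease if any 4-neighbor is
        # infected (periodic boundaries, for simplicity).
        contact = (np.roll(infected, 1, 0) | np.roll(infected, -1, 0) |
                   np.roll(infected, 1, 1) | np.roll(infected, -1, 1))
        nxt = np.where(phase > 0, (phase + 1) % (T_I + T_R + 1), 0)
        return np.where((phase == 0) & contact, 1, nxt)

    for t in range(500):
        phase = step(phase)
    print("infected sites:", int(((phase >= 1) & (phase <= T_I)).sum()))
    ```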

  12. The Miniaturization of the AFIT Random Noise Radar

    DTIC Science & Technology

    2013-03-01

    RANDOM NOISE RADAR. I. Introduction: Recent advances in technology and signal processing techniques have opened the door to using an ultra-wide band random... AIR FORCE INSTITUTE OF TECHNOLOGY, Wright-Patterson Air Force Base, Ohio. DISTRIBUTION STATEMENT A: Approved for public release; distribution unlimited. ...and Computer Engineering, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University, Air Education and Training

  13. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples and weakens the BAO signals if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, such smoothing should prove valuable for future improvements in measurements of galaxy clustering.
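
    A hedged sketch of the two ways of assigning redshifts to randoms: the usual "shuffled" draw versus sampling from a smooth fitted n(z). The polynomial fit below is only a stand-in for whatever smooth function the authors actually used:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy data redshifts (in practice: galaxy redshifts from the survey).
    z_data = rng.normal(0.5, 0.1, 50_000).clip(0.2, 0.8)

    # Shuffled method: draw random-catalogue redshifts from the data itself,
    # which imprints the data's own fluctuations onto the randoms.
    z_rand_shuffled = rng.choice(z_data, size=500_000)

    # Smoothed method: fit a smooth n(z) (a low-order polynomial to the
    # histogram here) and sample it by inverse-transform sampling.
    hist, edges = np.histogram(z_data, bins=60, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    nz_smooth = np.clip(np.polyval(np.polyfit(centers, hist, deg=6),
                                   centers), 0.0, None)
    cdf = np.cumsum(nz_smooth + 1e-12)
    cdf /= cdf[-1]
    z_rand_smooth = np.interp(rng.random(500_000), cdf, centers)
    ```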

  14. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    NASA Astrophysics Data System (ADS)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
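
    As a rough illustration of measuring a growth exponent under heavy-tailed driving, the toy below adds μ-stable noise to a discrete Edwards-Wilkinson-type interface. This is my own minimal stand-in, not the fractional equation studied in the paper, and with heavy tails the width fluctuates strongly between runs:

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(4)

    L, T, mu = 256, 500, 1.5
    h = np.zeros(L)
    width = np.empty(T)
    for t in range(T):
        lap = np.roll(h, 1) + np.roll(h, -1) - 2 * h   # discrete Laplacian
        noise = levy_stable.rvs(alpha=mu, beta=0.0, size=L, random_state=rng)
        h += 0.1 * lap + 0.1 * noise
        width[t] = h.std()                             # global width W(t)

    # The growth exponent beta is the early-time slope of log W vs log t.
    tt = np.arange(1, T + 1)
    beta = np.polyfit(np.log(tt[10:200]), np.log(width[10:200]), 1)[0]
    print("estimated growth exponent:", beta)
    ```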

  15. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    PubMed

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  16. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1²/((1/n)∑_{j=1}^{n} x_j²), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. P_n^(β)(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of this support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n and analyse their behaviour.
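
    The w statistic is straightforward to reproduce numerically for β = 2. The sketch below (illustrative matrix sizes and trial counts) samples GUE matrices and compares the histogram of w with the normalized Marčenko-Pastur form:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def gue_eigenvalues(n):
        """Eigenvalues of an n x n GUE matrix (beta = 2)."""
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return np.linalg.eigvalsh((a + a.conj().T) / 2.0)

    n, trials = 50, 2000
    w = np.empty(trials)
    for k in range(trials):
        x = gue_eigenvalues(n)
        i = rng.integers(n)                 # a randomly chosen eigenvalue
        w[k] = x[i]**2 / np.mean(x**2)

    hist, edges = np.histogram(w, bins=40, range=(0, 4), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mp = np.sqrt((4 - centers) / centers) / (2 * np.pi)  # normalized MP form
    print("max deviation from MP form:", np.max(np.abs(hist - mp)))
    ```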

  17. Recruitment and Retention for a Weight Loss Maintenance Trial Involving Weight Loss Prior to Randomization

    PubMed Central

    Grubber, J. M.; McVay, M. A.; Olsen, M. K.; Bolton, J.; Gierisch, J. M.; Taylor, S. S.; Maciejewski, M. L.; Yancy, W. S.

    2016-01-01

    Objective: A weight loss maintenance trial involving weight loss prior to randomization is challenging to implement due to the potential for dropout and insufficient weight loss. We examined rates and correlates of non-initiation, dropout, and insufficient weight loss during a weight loss maintenance trial. Methods: The MAINTAIN trial involved a 16-week weight loss program followed by randomization among participants losing at least 4 kg. Psychosocial measures were administered during a screening visit. Weight was obtained at the first group session and 16 weeks later to determine eligibility for randomization. Results: Of 573 patients who screened as eligible, 69 failed to initiate the weight loss program. In adjusted analyses, failure to initiate was associated with lower age, lack of a support person, and less encouragement for making dietary changes. Among participants who initiated, 200 dropped out, 82 lost insufficient weight, and 222 lost sufficient weight for randomization. Compared to losing sufficient weight, dropping out was associated with younger age and tobacco use, whereas losing insufficient weight was associated with non-White race and controlled motivation for physical activity. Conclusions: Studies should be conducted to evaluate strategies to maximize recruitment and retention of subgroups that are less likely to initiate and be retained in weight loss maintenance trials. PMID:28090340

  18. Effects of initiating moderate wine intake on abdominal adipose tissue in adults with type 2 diabetes: a 2-year randomized controlled trial.

    PubMed

    Golan, Rachel; Shelef, Ilan; Shemesh, Elad; Henkin, Yaakov; Schwarzfuchs, Dan; Gepner, Yftach; Harman-Boehm, Ilana; Witkow, Shula; Friger, Michael; Chassidim, Yoash; Liberty, Idit F; Sarusi, Benjamin; Serfaty, Dana; Bril, Nitzan; Rein, Michal; Cohen, Noa; Ben-Avraham, Sivan; Ceglarek, Uta; Stumvoll, Michael; Blüher, Matthias; Thiery, Joachim; Stampfer, Meir J; Rudich, Assaf; Shai, Iris

    2017-02-01

    Objective: To generate evidence-based conclusions about the effect of wine consumption on weight gain and abdominal fat accumulation and distribution in patients with type 2 diabetes. Design: In the 2-year randomized controlled CASCADE (CArdiovaSCulAr Diabetes & Ethanol) trial, patients following a Mediterranean diet were randomly assigned to drink 150 ml of mineral water, white wine or red wine with dinner for 2 years. Visceral adiposity and abdominal fat distribution were measured in a subgroup of sixty-five participants, using abdominal MRI. Setting: Ben-Gurion University of the Negev, Soroka Medical Center and the Nuclear Research Center Negev, Israel. Subjects: Alcohol-abstaining adults with well-controlled type 2 diabetes. Results: Forty-eight participants (red wine, n 27; mineral water, n 21) who completed a second MRI measurement were included in the 2-year analysis. Similar weight losses (sd) were observed: red wine 1·3 (3·9) kg; water 1·0 (4·2) kg (P=0·8 between groups). Changes (95 % CI) in abdominal adipose-tissue distribution were similar: red wine, visceral adipose tissue (VAT) -3·0 (-8·0, 2·0) %, deep subcutaneous adipose tissue (DSAT) +5·2 (-1·1, 11·6) %, superficial subcutaneous adipose tissue (SSAT) -1·9 (-5·0, 1·2) %; water, VAT -3·2 (-8·9, 2·5) %, DSAT +2·9 (-2·8, 8·6) %, SSAT -0·15 (-3·3, 2·9) %. No changes in antidiabetic medication and no substantial changes in energy intake (+126 (sd 2889) kJ/d (+30·2 (sd 690) kcal/d), P=0·8) were recorded. A 2-year decrease in glycated Hb (β=0·28, P=0·05) was associated with a decrease in VAT. Conclusions: Moderate wine consumption, as part of a Mediterranean diet, in persons with controlled diabetes did not promote weight gain or abdominal adiposity.

  19. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.

  20. The Shark Random Swim - (Lévy Flight with Memory)

    NASA Astrophysics Data System (ADS)

    Businger, Silvia

    2018-05-01

    The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory about the whole past. We study the Shark Random Swim, a random walk with memory about the whole past, whose steps are α-stable distributed with α ∈ (0, 2]. Our aim in this work is to study the impact of the heavy-tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
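
    One plausible reading of such a memory walk grafts α-stable innovations onto the elephant-walk memory rule; the paper's precise step rule should be checked against the original, so treat this purely as a sketch:

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(6)

    def memory_walk(n, p, alpha):
        """ERW-style walk: with probability p the walker repeats a
        uniformly chosen past step, otherwise it draws a fresh
        alpha-stable step (illustrative rule, not the paper's exact one)."""
        steps = np.empty(n)
        steps[0] = levy_stable.rvs(alpha, 0.0, random_state=rng)
        for i in range(1, n):
            if rng.random() < p:
                steps[i] = steps[rng.integers(i)]     # remember the past
            else:
                steps[i] = levy_stable.rvs(alpha, 0.0, random_state=rng)
        return np.cumsum(steps)

    traj = memory_walk(5000, p=0.7, alpha=1.5)
    print(traj[-1])
    ```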

  1. Distributional behavior of diffusion coefficients obtained by single trajectories in annealed transit time model

    NASA Astrophysics Data System (ADS)

    Akimoto, Takuma; Yamamoto, Eiji

    2016-12-01

    Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.
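
    The irreproducibility statement is usually quantified through the spread of single-trajectory time-averaged MSDs. A minimal sketch (with a Brownian control case, where the spread shrinks as trajectories get longer):

    ```python
    import numpy as np

    def time_averaged_msd(x, delta):
        """Time-averaged MSD of a single trajectory x at lag delta."""
        disp = x[delta:] - x[:-delta]
        return np.mean(disp**2)

    def ergodicity_breaking(trajectories, delta):
        """Relative standard deviation of the TAMSD across trajectories:
        it vanishes for reproducible (ergodic) dynamics and stays finite
        when time-averaged diffusion coefficients remain random."""
        tamsd = np.array([time_averaged_msd(x, delta) for x in trajectories])
        return tamsd.std() / tamsd.mean()

    rng = np.random.default_rng(7)
    trajs = np.cumsum(rng.normal(size=(200, 10_000)), axis=1)  # Brownian
    print(ergodicity_breaking(trajs, delta=10))
    ```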

  2. In silico study on the effects of matrix structure in controlled drug release

    NASA Astrophysics Data System (ADS)

    Villalobos, Rafael; Cordero, Salomón; Maria Vidales, Ana; Domínguez, Armando

    2006-07-01

    Purpose: To study the effects of drug concentration and spatial distribution of the medicament, in porous solid dosage forms, on the kinetics and total yield of drug release. Methods: Cubic networks are used as models of drug release systems. They were constructed by means of the dual site-bond model framework, which allows a substrate to have an adequate geometrical and topological distribution of its pore elements. Drug particles move inside the networks by following a random walk model with excluded volume interactions between the particles. The time evolution of drug release for different drug concentrations and different initial spatial distributions of the drug has been monitored. Results: The numerical results show that in all the studied cases drug release presents an anomalous behavior, and the consequences of the matrix structural properties, i.e., drug spatial distribution and drug concentration, on the drug release profile have been quantified. Conclusions: The Weibull function provides a simple connection between the model parameters and the microstructure of the drug release device. A critical modeling of drug release from matrix-type delivery systems is important in order to understand the transport mechanisms that are implicated, and to predict the effect of the device design parameters on the release rate.
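
    A bare-bones version of such a release simulation, assuming a cubic lattice with an absorbing outer boundary and simple excluded volume; the dual site-bond pore geometry of the paper is not modeled, and initial overlaps are possible but rare and harmless for this sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    N, n_drug, T = 20, 300, 2000          # lattice size, particles, sweeps
    pos = rng.integers(1, N - 1, size=(n_drug, 3))
    alive = np.ones(n_drug, dtype=bool)
    moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
    release = np.empty(T)
    for t in range(T):
        idx = np.flatnonzero(alive)
        occ = {tuple(p) for p in pos[idx]}    # excluded-volume bookkeeping
        trial = pos[idx] + moves[rng.integers(6, size=idx.size)]
        for j, q in zip(idx, trial):
            tq = tuple(q)
            if tq in occ:                     # site taken: move rejected
                continue
            occ.discard(tuple(pos[j]))
            pos[j] = q
            if (q < 0).any() or (q >= N).any():  # left the matrix: released
                alive[j] = False
            else:
                occ.add(tq)
        release[t] = 1.0 - alive.mean()
    # Profiles like release[t] are commonly summarized by a Weibull law,
    # Q(t) = 1 - exp(-(t / tau)**b), linking parameters to microstructure.
    ```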

  3. Dynamical heterogeneities and mechanical non-linearities: Modeling the onset of plasticity in polymer in the glass transition.

    PubMed

    Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H

    2017-12-27

    In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics by assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, under the simple assumption that the strain rate is constant.
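
    For reference, a common form of the Eyring stress dependence invoked here (notation mine; the paper may use a different variant):

    ```latex
    % tau_0: stress-free local relaxation time drawn from the initial
    % random distribution; sigma: local stress; V_a: activation volume.
    \[
    \tau(\sigma) \;=\; \tau_0\,
    \frac{\sigma V_a / k_B T}{\sinh\!\left(\sigma V_a / k_B T\right)} ,
    \]
    % which reduces to tau_0 as sigma -> 0 and decays exponentially at
    % large stress; broadly distributed tau_0 values are thus pulled
    % together under load, consistent with the narrowing noted above.
    ```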

  4. Cross-domain active learning for video concept detection

    NASA Astrophysics Data System (ADS)

    Li, Huan; Li, Chao; Shi, Yuan; Xiong, Zhang; Hauptmann, Alexander G.

    2011-08-01

    As video data from a variety of different domains (e.g., news, documentaries, entertainment) have distinctive data distributions, cross-domain video concept detection becomes an important task, in which one can reuse the labeled data of one domain to benefit the learning task in another domain with insufficient labeled data. In this paper, we approach this problem by proposing a cross-domain active learning method which iteratively queries labels of the most informative samples in the target domain. Traditional active learning assumes that the training (source domain) and test data (target domain) are from the same distribution. However, it may fail when the two domains have different distributions, because querying informative samples according to a base learner that initially learned from the source domain may no longer be helpful for the target domain. In our paper, we use the Gaussian random field model as the base learner, which has the advantage of exploring the distributions in both domains, and adopt uncertainty sampling as the query strategy. Additionally, we present an instance weighting trick to accelerate the adaptability of the base learner, and develop an efficient model updating method which can significantly speed up the active learning process. Experimental results on TRECVID collections highlight the effectiveness of the proposed approach.
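
    Uncertainty sampling itself reduces to a one-liner once the base learner outputs class probabilities; a sketch with made-up probabilities:

    ```python
    import numpy as np

    def uncertainty_sampling(probs, n_query):
        """Pick the target-domain samples whose predicted positive-class
        probability is closest to 0.5, i.e. the most uncertain ones."""
        probs = np.asarray(probs)
        return np.argsort(np.abs(probs - 0.5))[:n_query]

    # e.g. base-learner outputs for 8 unlabeled target-domain samples:
    print(uncertainty_sampling([0.9, 0.48, 0.1, 0.55, 0.99, 0.52, 0.3, 0.7], 3))
    ```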

  5. Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.

    2018-05-01

    Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
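
    A quick numerical check of the Poisson property of isolated-node counts; the "nonuniform" option below is only a crude stand-in for the self-similar intensity measures of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def isolated_count(n, r, nonuniform=False):
        """Number of isolated nodes in a random geometric graph with n
        nodes and linking range r on the unit square."""
        pts = rng.random((n, 2))
        if nonuniform:
            pts = pts**2        # crude nonuniform intensity measure
        d2 = ((pts[:, None, :] - pts[None, :, :])**2).sum(-1)
        np.fill_diagonal(d2, np.inf)
        degree = (d2 <= r * r).sum(axis=1)
        return int((degree == 0).sum())

    # For a Poisson-distributed count, mean and variance should agree.
    counts = np.array([isolated_count(300, 0.05) for _ in range(200)])
    print(counts.mean(), counts.var())
    ```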

  6. Optimal random search for a single hidden target.

    PubMed

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
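
    The square-root rule is easy to probe by simulation. The sketch below (illustrative eps and sigma) compares search Gaussians of several widths for a standard Gaussian target; the square root of a N(0, σ²) density is again Gaussian, with standard deviation σ√2:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def trials_to_find(search_sigma, eps=0.05, sigma=1.0):
        """Independent search points drawn until one lands within eps of
        a target drawn from N(0, sigma^2)."""
        target = rng.normal(0.0, sigma)
        n = 0
        while True:
            n += 1
            if abs(rng.normal(0.0, search_sigma) - target) < eps:
                return n

    for s in (1.0, np.sqrt(2.0), 2.0):
        mean_n = np.mean([trials_to_find(s) for _ in range(300)])
        print(f"search sigma {s:.2f}: mean trials {mean_n:.0f}")
    ```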

  7. Solar wind driving and substorm triggering

    NASA Astrophysics Data System (ADS)

    Newell, Patrick T.; Liou, Kan

    2011-03-01

    We compare solar wind driving and its changes for three data sets: (1) 4861 identifications of substorm onsets from satellite global imagers (Polar UVI and IMAGE FUV); (2) a similar number of otherwise random times chosen with a similar solar wind distribution (slightly elevated driving); (3) completely random times. Multiple measures of solar wind driving were used, including interplanetary magnetic field (IMF) Bz, the Kan-Lee electric field E_KL, the Borovsky function, and dΦ_MP/dt (all of which estimate dayside merging). Superposed epoch analysis verifies that the mean Bz has a northward turning (or at least averages less southward) starting 20 min before onset. We argue that the delay between IMF impact on the magnetopause and tail effects appearing in the ionosphere is about that long. The northward turning is not the effect of a few extreme events. The median field shows the same result, as do all other measures of solar wind driving. We compare the rate of northward turning to that observed after random times with slightly elevated driving. The subsequent reversion to the mean is essentially the same between random elevations and substorms. To further verify this, we consider in detail the distribution of changes from the statistical peak (20 min prior to onset) to onset. For Bz, the mean change after onset is +0.14 nT (i.e., IMF becomes more northward), but the standard deviation is σ = 2.8 nT. Thus large changes in either direction are common. For E_KL, the change is -15 nT km/s ± 830 nT km/s. Thus either a hypothesis predicting northward turnings or one predicting southward turnings would find abundant yet random confirming examples. Indeed, applying the Lyons et al. (1997) trigger criteria (excluding only the prior requirement of 22/30 min Bz < 0, which is often not valid for actual substorms) to these three sets of data shows that "northward turning triggers" occur in 23% of the random data, 24% of the actual substorms, and after 27% of the random elevations. These results strongly support the idea of Morley and Freeman (2007), that substorms require initial elevated solar wind driving, but that there is no evidence for external triggering. Finally, dynamic pressure, p, and velocity, v, show no meaningful variation around onset (although p averages 10% above an 11-year mean).

  8. Dynamic Simulation of Random Packing of Polydispersive Fine Particles

    NASA Astrophysics Data System (ADS)

    Ferraz, Carlos Handrey Araujo; Marques, Samuel Apolinário

    2018-02-01

    In this paper, we perform molecular dynamics (MD) simulations to study the two-dimensional packing process of both monosized and random-size particles with radii ranging from 1.0 to 7.0 μm. The initial positions as well as the radii of five thousand fine particles were defined inside a rectangular box by using a random number generator. Both the translational and rotational movements of each particle were considered in the simulations. In order to deal with interacting fine particles, we take into account both the contact forces and the long-range dispersive forces. We account for normal and static/sliding tangential friction forces between particles and between particle and wall by means of a linear model approach, while the long-range dispersive forces are computed by using a Lennard-Jones-like potential. The packing processes were studied assuming different long-range interaction strengths. We carry out statistical calculations of the different quantities studied, such as packing density, mean coordination number, kinetic energy, and radial distribution function, as the system evolves over time. We find that the long-range dispersive forces can strongly influence the packing process dynamics, as they might form large particle clusters, depending on the intensity of the long-range interaction strength.

  9. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

    Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
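
    A compact illustration of MBH with different hop distributions, using the Rastrigin function as a stand-in for a trajectory-optimization cost; step scales, counts, and the crude local search are all arbitrary choices of this sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def rastrigin(x):
        """Multimodal test function standing in for a trajectory cost."""
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def local_search(x):
        """Crude local improvement (placeholder for an NLP solver)."""
        for _ in range(50):
            y = x + rng.normal(0.0, 0.01, x.size)
            if rastrigin(y) < rastrigin(x):
                x = y
        return x

    def mbh(n_hops, hop, dim=5):
        x = local_search(rng.uniform(-5, 5, dim))
        for _ in range(n_hops):
            if hop == "uniform":
                step = rng.uniform(-1, 1, dim)
            elif hop == "cauchy":              # long-tailed hops
                step = 0.1 * rng.standard_cauchy(dim)
            else:                              # pareto, symmetrized
                step = 0.1 * rng.pareto(1.5, dim) * rng.choice([-1, 1], dim)
            y = local_search(x + step)
            if rastrigin(y) < rastrigin(x):    # monotonic: keep improvements
                x = y
        return rastrigin(x)

    for hop in ("uniform", "cauchy", "pareto"):
        print(hop, np.mean([mbh(30, hop) for _ in range(5)]))
    ```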

  10. Dynamical Evolution of Planetesimals in the Outer Solar System. II. The Saturn/Uranus and Uranus/Neptune Zones

    NASA Astrophysics Data System (ADS)

    Grazier, Kevin R.; Newman, William I.; Varadi, Ferenc; Kaula, William M.; Hyman, James M.

    1999-08-01

    We report on numerical simulations exploring the dynamical stability of planetesimals in the gaps between the outer Solar System planets. We search for stable niches in the Saturn/Uranus and Uranus/Neptune zones by employing 10,000 massless particles (many more than previous studies in these two zones), using high-order optimized multistep integration schemes coupled with roundoff error minimizing methods. An additional feature of this study, differing from its predecessors, is that our initial distributions contain particles on orbits which are both inclined and noncircular. These initial distributions were also Gaussian distributed such that the Gaussian peaks were at the midpoint between the neighboring perturbers. The simulations showed an initial transient phase where the bulk of the primordial planetesimal swarm was removed from the Solar System within 10^5 years. This is about 10 times longer than we observed in our previous Jupiter/Saturn studies. Next, there was a gravitational relaxation phase where the particles underwent a random walk in momentum space and were exponentially eliminated by random encounters with the planets. Unlike our previous Jupiter/Saturn simulation, the particles did not fully relax into a third Lagrangian niche phase where long-lived particles are at Lagrange points or stable niches. This is either because the Lagrangian niche phase never occurs or because these simulations did not have enough particles for this third phase to manifest. In these simulations, there was a general trend for the particles to migrate outward and eventually to be cleared out by the outermost planet in the zone. We confirmed that particles with higher eccentricities had shorter lifetimes and that the resonances between the jovian planets "pumped up" the eccentricities of the planetesimals with low-inclination orbits more than those with higher inclinations. We estimated the expected lifetime of particles using kinetic theory, and even though the time scale of the Uranus/Neptune simulation was 380 times longer than our previous Jupiter/Saturn simulation, the planetesimals in the Uranus/Neptune zone were cleared out more quickly than those in the Saturn/Uranus zone because of the positions of resonances with the jovian planets. These resonances had an even greater effect than random gravitational stirring in the winnowing process and confirm that all the jovian planets are necessary in long simulations. Even though we observed several long-lived zones near 12.5, 14.4, 16, 24.5, and 26 AU, only two particles remained at the end of the 10^9-year integration: one near the 2:3 Saturn resonance, and the other near the Neptune 1:1 resonance. This suggests that niches for planetesimal material between the jovian planets are rare and may exist either only in extremely narrow bands or in the neighborhoods of the triangular Lagrange points of the outer planets.

  11. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue 'Second quantum revolution: foundational questions'.

  12. The triglyceride composition of 17 seed fats rich in octanoic, decanoic, or lauric acid.

    PubMed

    Litchfield, C; Miller, E; Harlow, R D; Reiser, R

    1967-07-01

    Seed fats of eight species of Lauraceae (laurel family), six species of Cuphea (Lythraceae family), and three species of Ulmaceae (elm family) were extracted, and the triglycerides were isolated by preparative thin-layer chromatography. GLC of the triglycerides on a silicone column resolved 10 to 18 peaks with a 22 to 58 carbon number range for each fat. These carbon number distributions yielded considerable information about the triglyceride compositions of the fats. The most interesting finding was with Laurus nobilis seed fat, which contained 58.4% lauric acid and 29.2-29.8% trilaurin. A maximum of 19.9% trilaurin would be predicted by a 1,2,3-random, a 1,3-random-2-random, or a 1-random-2-random-3-random distribution of the lauric acid (3). This indicates a specificity for the biosynthesis of a simple triglyceride by Laurus nobilis seed enzymes. Cuphea lanceolata seed fat also contained more simple triglyceride (tridecanoin) than would be predicted by the fatty acid distribution theories.

  13. On Convergent Probability of a Random Walk

    ERIC Educational Resources Information Center

    Lee, Y.-F.; Ching, W.-K.

    2006-01-01

    This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrence relations is used to obtain the convergent probability of the random walk with different initial positions.
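
    The flavor of the recurrence-relation method, illustrated on the simplest symmetric-walk absorption problem (the note's card game differs in detail):

    ```python
    # Probability p_k of reaching N before 0 from position k, for a
    # symmetric walk, satisfies p_k = (p_{k-1} + p_{k+1}) / 2 with
    # boundary conditions p_0 = 0 and p_N = 1; the solution is p_k = k/N.
    N = 10
    p = [k / N for k in range(N + 1)]
    assert all(abs(p[k] - (p[k - 1] + p[k + 1]) / 2) < 1e-12
               for k in range(1, N))
    print(p)
    ```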

  14. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
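
    The global significance of the Gaussian curvature rests on the Gauss-Bonnet theorem, which for a closed surface S reads:

    ```latex
    % K = kappa_1 * kappa_2 is the product of the principal curvatures.
    \[
    \int_S K \, \mathrm{d}A \;=\; 2\pi\,\chi(S),
    \]
    % so the surface integral of K returns the Euler characteristic
    % chi(S), a topological invariant of the nodal surface.
    ```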

  15. Efficient sampling of complex network with modified random walk strategies

    NASA Astrophysics Data System (ADS)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: the choosing seed node (CSN) random walk and the no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within limited steps. And thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some obvious characteristics, like the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
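
    A sketch of the no-retracing (NR) strategy on an adjacency-list graph; the CSN strategy differs only in how the start node is chosen, and the dead-end rule below is my own assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    def nr_random_walk(adj, start, n_steps):
        """No-retracing random walk: never step straight back along the
        edge just traversed (unless the current node is a dead end)."""
        visited, prev, cur = [start], None, start
        for _ in range(n_steps):
            nbrs = [v for v in adj[cur] if v != prev]
            if not nbrs:                  # dead end: retracing is allowed
                nbrs = adj[cur]
            prev, cur = cur, nbrs[rng.integers(len(nbrs))]
            visited.append(cur)
        return visited

    # Tiny example graph (adjacency lists).
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
    print(nr_random_walk(adj, 0, 10))
    ```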

  16. Colonization of weakened trees by mass-attacking bark beetles: no penalty for pioneers, scattered initial distributions and final regular patterns

    PubMed Central

    Gabriel, Edith; Louis, Marceau; Deneubourg, Jean-Louis; Grégoire, Jean-Claude

    2018-01-01

    Bark beetles use aggregation pheromones to promote group foraging, thus increasing the chances of an individual to find a host and, when relevant, to overwhelm the defences of healthy trees. When a male beetle finds a suitable host, it releases pheromones that attract potential mates as well as other 'spying' males, which results in aggregations on the new host. To date, most studies have been concerned with the use of aggregation pheromones by bark beetles to overcome the defences of living, well-protected trees. How insects behave when facing undefended or poorly defended hosts remains largely unknown. The spatio-temporal pattern of resource colonization by the European eight-toothed spruce bark beetle, Ips typographus, was quantified when weakly defended hosts (fallen trees) were attacked. In many of the replicates, colonization began with the insects rapidly scattering over the available surface and then randomly filling the gaps until a regular distribution was established, which resulted in a constant decrease in nearest-neighbour distances to a minimum below which attacks were not initiated. The scattered distribution of the first attacks suggested that the trees were only weakly defended. A minimal theoretical distance of 2.5 cm to the earlier settlers (corresponding to a density of 3.13 attacks dm⁻²) was calculated, but the attack density always remained lower, between 0.4 and 1.2 holes dm⁻², according to our observations. PMID:29410791

  17. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    NASA Astrophysics Data System (ADS)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means Clustering. In its application, the determination of the initial cluster centers greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with starting centroids determined by a random method and by a KD-Tree method. Random initial centroid determination, on a data set of 1000 student academic records used to classify students as potential dropouts, gave an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gave an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means Clustering with initial KD-Tree centroid selection has better accuracy than the K-Means Clustering method with random initial centroid selection.
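
    The comparison is easy to reproduce in outline. The sketch below uses k-means++ as a stand-in for the paper's KD-Tree seeding (which scikit-learn does not provide) on synthetic data, with the within-cluster SSE read from the fitted model:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(13)
    # Synthetic 2-D data with five loose groups along the diagonal.
    X = rng.normal(size=(1000, 2)) + rng.integers(0, 5, size=(1000, 1)) * 3.0

    # Random seeding versus a smarter spread-out seeding.
    for init in ("random", "k-means++"):
        km = KMeans(n_clusters=5, init=init, n_init=10, random_state=0).fit(X)
        print(init, "SSE:", km.inertia_)   # inertia_ is the within-cluster SSE
    ```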

  18. Topology for Dominance for Network of Multi-Agent System

    NASA Astrophysics Data System (ADS)

    Szeto, K. Y.

    2007-05-01

    The resource allocation problem in evolving two-dimensional point patterns is investigated for the existence of good strategies for the construction of an initial configuration that leads to fast dominance of the pattern by one single species, which can be interpreted as market dominance by a company in the context of multi-agent systems in econophysics. For the hexagonal lattice, certain special topological arrangements of the resource in two dimensions, such as rings, lines and clusters, have a higher probability of dominance compared to a random pattern. For more complex networks, a systematic way to search for a stable and dominant strategy of resource allocation in the changing environment is found by means of a genetic algorithm. Five typical features can be summarized by means of the distribution function for the local neighborhood of friends and enemies as well as the local clustering coefficients: (1) The winner has more triangles than the loser has. (2) The winner likes to form clusters, as the winner tends to connect with other winners rather than with losers, while the loser tends to connect with winners rather than losers. (3) The distribution function of friends as well as enemies for the winner is broader than the corresponding distribution function for the loser. (4) The connectivity at which the peak of the distribution of friends for the winner occurs is larger than that of the loser, while the peak values for friends for winners are lower. (5) The connectivity at which the peak of the distribution of enemies for the winner occurs is smaller than that of the loser, while the peak values for enemies for winners are lower. These five features appear to be general, at least in the context of two-dimensional hexagonal lattices of various sizes, hierarchical lattices, Voronoi diagrams, as well as high-dimensional random networks. These general local topological properties of networks are relevant to strategists aiming at dominance in evolving patterns when the interaction between the agents is local.

  19. Probability Analysis of the Wave-Slamming Pressure Values of the Horizontal Deck with Elastic Support

    NASA Astrophysics Data System (ADS)

    Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao

    2018-06-01

    This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. The time series of the slamming pressure during the wave impact were first obtained through statistical analyses of experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that the exceedance probability distribution of the maximum slamming pressure peak accords with the three-parameter Weibull distribution. Furthermore, the range of and relationships among the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions is presented comprehensively; the parameter values of the Weibull distribution of wave-slamming pressure peaks differed between test models, and were found to decrease with increased stiffness of the elastic support. A damage criterion for the structural model under wave impact is also discussed preliminarily: the model was destroyed when the average slamming time exceeded a certain value during the duration of the wave impact.
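
    For concreteness, the three-parameter Weibull exceedance probability discussed above, with illustrative parameter values chosen so that D + L = 1 (the shape value k is arbitrary here):

    ```python
    import numpy as np

    def weibull_exceedance(x, D, L, k):
        """Exceedance probability P(X > x) for a three-parameter Weibull
        with location D, scale L and shape k (notation from the paper)."""
        z = np.clip((np.asarray(x, dtype=float) - D) / L, 0.0, None)
        return np.exp(-z**k)

    # When the normalized peak equals D + L, the exceedance is exp(-1),
    # i.e. 36.79%, matching the threshold quoted in the abstract.
    print(weibull_exceedance(1.0, D=0.4, L=0.6, k=1.8))
    ```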

  20. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
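
    A caricature of a BCRW with two alternating modes; all parameter values are illustrative and not fitted to the cell tracks:

    ```python
    import numpy as np

    rng = np.random.default_rng(14)

    def bcrw(n_flights, mean_len=(5.0, 1.5), turn_sd=(0.3, 1.5)):
        """Bimodal correlated random walk: alternating 'directional' and
        're-orientation' flights; step lengths exponential, headings
        correlated via Gaussian turn angles."""
        x = y = heading = 0.0
        path = [(0.0, 0.0)]
        for k in range(n_flights):
            mode = k % 2                   # alternate between the two modes
            for _ in range(rng.integers(3, 8)):
                heading += rng.normal(0.0, turn_sd[mode])
                step = rng.exponential(mean_len[mode])
                x += step * np.cos(heading)
                y += step * np.sin(heading)
                path.append((x, y))
        return np.array(path)

    path = bcrw(40)
    print(path[-1])
    ```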

  1. Probabilistic SSME blades structural response under random pulse loading

    NASA Technical Reports Server (NTRS)

    Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.

    1987-01-01

    The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
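
    The random pulse model described here is simple to generate; a sketch with illustrative rate, intensity, and candidate locations:

    ```python
    import numpy as np

    rng = np.random.default_rng(15)

    def random_pulse_train(rate, sigma, t_end, locations=(0, 1, 2)):
        """Random impact model: Poisson arrival times, zero-mean normal
        amplitudes with standard deviation sigma, and a location drawn
        with equal probability from a few candidate points near the tip."""
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate)   # Poisson inter-arrival times
            if t > t_end:
                break
            events.append((t, rng.normal(0.0, sigma), rng.choice(locations)))
        return events

    pulses = random_pulse_train(rate=2.0, sigma=1.0, t_end=10.0)
    print(len(pulses), pulses[:2])
    ```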

  2. Universal Hitting Time Statistics for Integrable Flows

    NASA Astrophysics Data System (ADS)

    Dettmann, Carl P.; Marklof, Jens; Strömbergsson, Andreas

    2017-02-01

    The perceived randomness in the time evolution of "chaotic" dynamical systems can be characterized by universal probabilistic limit laws, which do not depend on the fine features of the individual system. One important example is the Poisson law for the times at which a particle with random initial data hits a small set. This was proved in various settings for dynamical systems with strong mixing properties. The key result of the present study is that, despite the absence of mixing, the hitting times of integrable flows also satisfy universal limit laws which are, however, not Poisson. We describe the limit distributions for "generic" integrable flows and a natural class of target sets, and illustrate our findings with two examples: the dynamics in central force fields and ellipse billiards. The convergence of the hitting time process follows from a new equidistribution theorem in the space of lattices, which is of independent interest. Its proof exploits Ratner's measure classification theorem for unipotent flows, and extends earlier work of Elkies and McMullen.

  3. Cancerous tumor: the high frequency of a rare event.

    PubMed

    Galam, S; Radomski, J P

    2001-05-01

    A simple model for cancer growth is presented using cellular automata. Cells diffuse randomly on a two-dimensional square lattice. Individual cells can turn cancerous at a very low rate. During each diffusive step, local fights may occur between healthy and cancerous cells. Associated outcomes depend on some biased local rules, which are independent of the overall cancerous cell density. The model's unique ingredients are the frequency of local fights and the bias amplitude. While each isolated cancerous cell is eventually destroyed, an initial two-cell tumor cluster is found to have a nonzero probability of spreading over the whole system. The associated phase diagram for survival or death is obtained as a function of both the rate of fights and the bias distribution. Within the model, although the occurrence of a killing cluster is a very rare event, it turns out to happen almost systematically over long periods of time, e.g., on the order of an adult's life span. Thus, after some age, survival from tumorous cancer becomes random.

  4. Object-based change detection method using refined Markov random field

    NASA Astrophysics Data System (ADS)

    Peng, Daifeng; Zhang, Yongjun

    2017-01-01

    In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted and the G-statistic is used to measure the distance between different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.

  5. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.

  6. Transport and Reactive Flow Modelling Using A Particle Tracking Method Based on Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Oliveira, R.; Bijeljic, B.; Blunt, M. J.; Colbourne, A.; Sederman, A. J.; Mantle, M. D.; Gladden, L. F.

    2017-12-01

    Mixing and reactive processes have a large impact on the viability of enhanced oil and gas recovery projects that involve acid stimulation and CO2 injection. To achieve a successful design of the injection schemes, an accurate understanding of the interplay between pore structure, flow and reactive transport is necessary. Depending on transport and reactive conditions, this complex coupling can also depend on initial rock heterogeneity across a variety of scales. To address these issues, we devise a new method to study transport and reactive flow in porous media at multiple scales. The transport model is based on an efficient Particle Tracking Method based on Continuous Time Random Walks (CTRW-PTM) on a lattice. Transport is modelled using an algorithm described in Rhodes and Blunt (2006) and Srinivasan et al. (2010); this model is extended to enable reactive flow predictions in subsurface rock undergoing a first-order fluid/solid chemical reaction. The reaction-induced alteration of the fluid/solid interface is accommodated in the model through changes in porosity and flow field, leading to time-dependent transport characteristics in the form of transit time distributions which account for the change in rock heterogeneity. This also enables the study of concentration profiles at the scale of interest. Firstly, we validate the transport model by comparing the probability of molecular displacement (propagators) measured by Nuclear Magnetic Resonance (NMR) with our modelled predictions for concentration profiles. The experimental propagators for three different porous media of increasing complexity, a beadpack, a Bentheimer sandstone and a Portland carbonate, show a good agreement with the model. Next, we capture the time evolution of the propagator distribution in a reactive flow experiment, where hydrochloric acid is injected into a limestone rock. We analyse the time-evolving non-Fickian signatures of the transport during reactive flow and observe an increase in transport heterogeneity at later times, reflecting the increase in rock heterogeneity. The evolution of the transit time distribution is associated with the evolution of concentration profiles, thus highlighting the impact of initial rock structure on the reactive transport for a range of Pe and Da numbers.

  7. New Quantum Key Distribution Scheme Based on Random Hybrid Quantum Channel with EPR Pairs and GHZ States

    NASA Astrophysics Data System (ADS)

    Yan, Xing-Yu; Gong, Li-Hua; Chen, Hua-Ying; Zhou, Nan-Run

    2018-05-01

    A theoretical quantum key distribution scheme based on a random hybrid quantum channel with EPR pairs and GHZ states is devised. In this scheme, EPR pairs and tripartite GHZ states are exploited to set up the random hybrid quantum channel. Only one photon in each entangled state needs to travel back and forth in the channel. The security of the quantum key distribution scheme is guaranteed by more than one round of eavesdropping-check procedures. The scheme has high capacity, since one particle can carry more than two bits of information via quantum dense coding.

  8. Atom probe study of vanadium interphase precipitates and randomly distributed vanadium precipitates in ferrite.

    PubMed

    Nöhrer, M; Zamberger, S; Primig, S; Leitner, H

    2013-01-01

    Atom probe tomography and transmission electron microscopy were used to examine the precipitation reaction in the austenite and ferrite phases in a vanadium micro-alloyed steel after a thermo-mechanical process. Precipitates were found only in the ferrite phase, where two different types were detected; the aim was therefore to reveal the difference between these two types. The first type comprised randomly distributed precipitates formed from V-supersaturated ferrite, and the second type V interphase precipitates. Not only the arrangement of the particles differed but also their chemical composition. The randomly distributed precipitates consisted of V, C and N; in contrast, the interphase precipitates showed a composition of V, C and Mn. Furthermore, the randomly distributed precipitates had a maximum size of 20 nm and the interphase precipitates a maximum size of 15 nm. These differences were attributed to the sites at which the precipitates formed. The randomly distributed precipitates formed in a matrix consisting mainly of 0.05 at% C, 0.68 at% Si, 0.03 at% N, 0.145 at% V and 1.51 at% Mn, whereas the interphase precipitates formed in a region with a much higher C, Mn and V content. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.

    PubMed

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2013-02-01

    In this paper, the distributed synchronization problem for networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of the network depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. Using the Lyapunov function method, we show that distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of the nonlinearities have a great impact on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.

  10. Slow movement resistance training using body weight improves muscle mass in the elderly: A randomized controlled trial.

    PubMed

    Tsuzuku, S; Kajioka, T; Sakakibara, H; Shimaoka, K

    2018-04-01

    To examine the effect of a 12-week slow movement resistance training program using body weight as a load (SRT-BW) on muscle mass, strength, and fat distribution in healthy elderly people. Fifty-three men and 35 women aged 70 years or older without experience in resistance training participated, and they were randomly assigned to an SRT-BW group or a control group. The control group did not receive any intervention, but participants in this group underwent repeat measurements 12 weeks later. The SRT-BW program consisted of 3 different exercises (squat, tabletop push-up, and sit-up), which were designed to stimulate the anterior major muscles. Initially, these exercises were performed as 2 sets of 10 repetitions, and subsequently the number of repetitions was increased progressively by 2 repetitions every 4 weeks. Participants were instructed to perform the eccentric and concentric phases of each movement slowly (spending 4 seconds on each phase), covering the full range of motion. We evaluated muscle mass, strength, and fat distribution at baseline and after 12 weeks of training. Changes over 12 weeks were significantly greater in the SRT-BW group than in the control group, with a decrease in waist circumference, hip circumference, and abdominal preperitoneal and subcutaneous fat thickness, and an increase in thigh muscle thickness, knee extension strength, and hip flexion strength. In conclusion, relatively short-term SRT-BW was effective in improving muscle mass, strength, and fat distribution in healthy elderly people. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Radial alignment of elliptical galaxies by the tidal force of a cluster of galaxies

    NASA Astrophysics Data System (ADS)

    Rong, Yu; Yi, Shu-Xu; Zhang, Shuang-Nan; Tu, Hong

    2015-08-01

    Unlike the random radial orientation distribution of field elliptical galaxies, galaxies in a cluster are expected to point preferentially towards the centre of the cluster, as a result of the cluster's tidal force on its member galaxies. In this work, an analytic model is formulated to simulate this effect. The deformation time-scale of a galaxy in a cluster is usually much shorter than the time-scale of change of the tidal force; the dynamical process of tidal interaction within the galaxy can thus be ignored. The equilibrium shape of a galaxy is then assumed to be the surface of equipotential that is the sum of the self-gravitational potential of the galaxy and the tidal potential of the cluster at this location. We use a Monte Carlo method to calculate the radial orientation distribution of cluster galaxies, by assuming a Navarro-Frenk-White mass profile for the cluster and the initial ellipticity of field galaxies. The radial angles show a single-peak distribution centred at zero. The Monte Carlo simulations also show that a shift of the reference centre from the real cluster centre weakens the anisotropy of the radial angle distribution. Therefore, the expected radial alignment cannot be revealed if the distribution of spatial position angle is used instead of that of radial angle. The observed radial orientations of elliptical galaxies in cluster Abell 2744 are consistent with the simulated distribution.

  12. Theoretical size distribution of fossil taxa: analysis of a null model.

    PubMed

    Reed, William J; Hughes, Barry D

    2007-03-22

    This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.

  13. Two Models of Time Constrained Target Travel between Two Endpoints Constructed by the Application of Brownian Motion and a Random Tour.

    DTIC Science & Technology

    1983-03-01

    …the Naval Postgraduate School. As my advisor, Prof. Gaver suggested and derived the Brownian bridge, as well as nudged me in the right direction when… the random tour process, by deriving the mean square radial distance E[R²(t)] for a random tour with arbitrary course change distribution… the random tour model, and equation (3) results as expected. The notion of an arbitrary course change distribution is important because the…

  14. Anonymous authenticated communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Cheryl L; Schroeppel, Richard C; Snyder, Lillian A

    2007-06-19

    A method of performing electronic communications between members of a group wherein the communications are authenticated as being from a member of the group and have not been altered, comprising: generating a plurality of random numbers; distributing in a digital medium the plurality of random numbers to the members of the group; publishing a hash value of contents of the digital medium; distributing to the members of the group public-key-encrypted messages each containing a same token comprising a random number; and encrypting a message with a key generated from the token and the plurality of random numbers.
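
    A minimal sketch of the scheme's flow using only standard-library primitives; the concrete choices (SHA-256 for the published hash and for key derivation, and a toy XOR cipher standing in for the final encryption step) are illustrative assumptions, since the record does not name specific algorithms.

        import hashlib
        import secrets
        from dataclasses import dataclass

        @dataclass
        class GroupMedium:
            pool: list[bytes]          # shared random numbers distributed to members

            def digest(self) -> str:
                # Published hash value committing to the medium's contents.
                h = hashlib.sha256()
                for r in self.pool:
                    h.update(r)
                return h.hexdigest()

        def derive_key(token: bytes, pool: list[bytes]) -> bytes:
            # Key generated from the token and the plurality of random numbers.
            h = hashlib.sha256(token)
            for r in pool:
                h.update(r)
            return h.digest()

        def xor_encrypt(message: bytes, key: bytes) -> bytes:
            # Toy XOR stream for illustration only; a real system would use an AEAD.
            stream = (key * (len(message) // len(key) + 1))[: len(message)]
            return bytes(m ^ k for m, k in zip(message, stream))

        # Setup: distribute random numbers, publish the commitment, share a token.
        medium = GroupMedium(pool=[secrets.token_bytes(32) for _ in range(8)])
        print("published hash:", medium.digest()[:16], "...")
        token = secrets.token_bytes(32)      # sent public-key-encrypted to members

        key = derive_key(token, medium.pool)
        ct = xor_encrypt(b"group message", key)
        assert xor_encrypt(ct, key) == b"group message"   # any member can decrypt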

  15. Return probabilities and hitting times of random walks on sparse Erdös-Rényi graphs.

    PubMed

    Martin, O C; Sulc, P

    2010-03-01

    We consider random walks on random graphs, focusing on return probabilities and hitting times for sparse Erdös-Rényi graphs. Using the tree approach, which is expected to be exact in the large graph limit, we show how to solve for the distribution of these quantities and we find that these distributions exhibit a form of self-similarity.

  16. Early Surgery versus Initial Conservative Treatment in Patients with Traumatic Intracerebral Hemorrhage (STITCH[Trauma]): The First Randomized Trial

    PubMed Central

    Mendelow, A. David; Rowan, Elise N.; Francis, Richard; McColl, Elaine; McNamee, Paul; Chambers, Iain R.; Unterberg, Andreas; Boyers, Dwayne; Mitchell, Patrick M.

    2015-01-01

    Intraparenchymal hemorrhages occur in a proportion of severe traumatic brain injury (TBI) patients, but the role of surgery in their treatment is unclear. This international multi-center, patient-randomized, parallel-group trial compared early surgery (hematoma evacuation within 12 h of randomization) with initial conservative treatment (subsequent evacuation allowed if deemed necessary). Patients were randomized using an independent randomization service within 48 h of TBI. Patients were eligible if they had no more than two intraparenchymal hemorrhages of 10 mL or more and did not have an extradural or subdural hematoma that required surgery. The primary outcome measure was the traditional dichotomous split of the Glasgow Outcome Scale obtained by postal questionnaires sent directly to patients at 6 months. The trial was halted early by the UK funding agency (NIHR HTA) for failure to recruit sufficient patients from the UK (trial registration: ISRCTN19321911). A total of 170 patients were randomized from 31 of 59 registered centers worldwide. Of 82 patients randomized to early surgery with complete follow-up, 30 (37%) had an unfavorable outcome. Of 85 patients randomized to initial conservative treatment with complete follow-up, 40 (47%) had an unfavorable outcome (odds ratio, 0.65; 95% confidence interval [CI], 0.35 to 1.21; p=0.17), with an absolute benefit of 10.5% (CI, −4.4 to 25.3%). There were significantly more deaths in the first 6 months in the initial conservative treatment group (33% vs. 15%; p=0.006). The 10.5% absolute benefit with early surgery was consistent with the initial power calculation. However, with the low sample size resulting from the premature termination, we cannot exclude the possibility that this could be a chance finding. A further trial is required urgently to assess whether this encouraging signal can be confirmed. PMID:25738794

  17. Percolation Laws of a Fractal Fracture-Pore Double Medium

    NASA Astrophysics Data System (ADS)

    Zhao, Yangsheng; Feng, Zengchao; Lv, Zhaoxing; Zhao, Dong; Liang, Weiguo

    2016-12-01

    The fracture-pore double porosity medium is one of the most common media in nature, as exemplified by rock masses in strata. Fractures have a more significant effect on fluid flow than pores in a fracture-pore double porosity medium; hence, the fracture effect on percolation should be considered when studying percolation phenomena in porous media. In this paper, based on the fractal distribution law, the locations of three-dimensional (3D) fracture surfaces and two-dimensional (2D) fracture traces in the rock mass are determined using a random function with uniform distribution. Pores are superimposed to build a fractal fracture-pore double medium. Numerical experiments were performed to show percolation phenomena in the fracture-pore double medium. The percolation threshold can be determined from three independent variables (porosity n, fracture fractal dimension D, and initial value of fracture number N0): once any two are fixed, the percolation probability exhibits a critical point as the remaining parameter changes. When the initial value of the fracture number is greater than zero, the percolation threshold in the fracture-pore medium is much smaller than that in a pore medium. When the fracture number equals zero, the fracture-pore medium degenerates to a pore medium, and both percolation thresholds are the same.

  18. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    NASA Astrophysics Data System (ADS)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.

  19. Delivering successful randomized controlled trials in surgery: Methods to optimize collaboration and study design.

    PubMed

    Blencowe, Natalie S; Cook, Jonathan A; Pinkney, Thomas; Rogers, Chris; Reeves, Barnaby C; Blazeby, Jane M

    2017-04-01

    Randomized controlled trials in surgery are notoriously difficult to design and conduct due to numerous methodological and cultural challenges. Over the last 5 years, several UK-based surgical trial-related initiatives have been funded to address these issues. These include the development of Surgical Trials Centers and Surgical Specialty Leads (individual surgeons responsible for championing randomized controlled trials in their specialist fields), both funded by the Royal College of Surgeons of England; networks of research-active surgeons in training; and investment in methodological research relating to surgical randomized controlled trials (to address issues such as recruitment, blinding, and the selection and standardization of interventions). This article discusses these initiatives in more detail and provides exemplar cases to illustrate how the methodological challenges have been tackled. The initiatives have surpassed expectations, resulting in a renaissance in surgical research throughout the United Kingdom, such that the number of patients entering surgical randomized controlled trials has doubled.

  20. Laser absorption of carbon fiber reinforced polymer with randomly distributed carbon fibers

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Xu, Hebing; Li, Chao

    2018-03-01

    Laser processing of carbon fiber reinforced polymer (CFRP) is a non-traditional machining method which has many prospective applications. The laser absorption characteristics of CFRP are analyzed in this paper. A ray tracing model describing the interaction of the laser spot with CFRP is established. The material model contains randomly distributed carbon fibers which are generated using an improved carbon fiber placement method. It was found that CFRP has good laser absorption due to multiple reflections of the light rays in the material’s microstructure. The randomly distributed carbon fibers make the absorptivity of the light rays change randomly in the laser spot. Meanwhile, the average absorptivity fluctuation is obvious during movement of the laser. The experimental measurements agree well with the values predicted by the ray tracing model.

  1. Weed species composition and distribution pattern in the maize crop under the influence of edaphic factors and farming practices: A case study from Mardan, Pakistan.

    PubMed

    Ahmad, Zeeshan; Khan, Shujaul Mulk; Abd Allah, Elsayed Fathi; Alqarawi, Abdulaziz Abdullah; Hashem, Abeer

    2016-11-01

    Weeds are unwanted plant species growing in ordinary environments. In nature there are a total of 8000 weed species, of which 250 are important to world agriculture. The present study examined weed species composition and distribution pattern, with special reference to edaphic factors and farming practices, in the maize crop of District Mardan during August and September 2014. Quadrat methods were used to assess weed species distribution in relation to edaphic factors and farming practices. Phytosociological attributes such as frequency, relative frequency, density, relative density and importance values were measured by placing 9 quadrats (1 × 1 m²) randomly in each field. Initial results showed that the study area hosts 29 weed species belonging to 27 genera and 15 families, distributed in 585 quadrats. A presence/absence data sheet of the 29 weed species and 65 fields was analyzed with PC-ORD version 5. Cluster and Two-Way Cluster Analyses identified four different weed communities with significant indicator species, interpreted with respect to the underlying environmental variables using data attribute plots. Canonical Correspondence Analysis (CCA) in CANOCO version 4.5 was used to assess the environmental gradients of the weed species. It is concluded that among the edaphic factors the strongest variables were higher potassium concentration, organic matter, and the sandy nature of the soil. CCA plots of both weed species and sampled fields, based on questionnaire data, indicated that farming practices such as the application of fertilizers, irrigation and chemical spray were the main factors determining the weed communities.

  2. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cube root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
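
    The core mechanism is easy to reproduce numerically: the logarithm of a product of independent positive factors is a sum of independent terms, so log N is approximately normal and N approximately lognormal. A minimal sketch follows, with all seven factor ranges invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        n_samples = 200_000

        # Seven positive random factors, each uniform around an assumed mean.
        factors = [
            rng.uniform(349e9, 351e9, n_samples),   # e.g. number of stars (invented range)
            rng.uniform(0.2, 0.6, n_samples),
            rng.uniform(0.5, 2.5, n_samples),
            rng.uniform(0.05, 0.4, n_samples),
            rng.uniform(0.05, 0.4, n_samples),
            rng.uniform(0.01, 0.2, n_samples),
            rng.uniform(1e-9, 1e-7, n_samples),
        ]
        N = np.prod(factors, axis=0)

        # Compare log N with the normal distribution it should approach.
        logN = np.log(N)
        mu, s2 = logN.mean(), logN.var()
        print(f"mean of N        : {N.mean():.3e}")
        print(f"lognormal mean   : {np.exp(mu + s2 / 2):.3e}")   # should be close
        print(f"skewness of logN : {((logN - mu)**3).mean() / s2**1.5:+.3f}")  # near 0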

  3. On random field Completely Automated Public Turing Test to Tell Computers and Humans Apart generation.

    PubMed

    Kouritzin, Michael A; Newton, Fraser; Wu, Biao

    2013-04-01

    Herein, we propose generating CAPTCHAs through random field simulation and give a novel, effective and efficient algorithm to do so. Indeed, we demonstrate that sufficient information about word tests for easy human recognition is contained in the site marginal probabilities and the site-to-nearby-site covariances and that these quantities can be embedded directly into certain conditional probabilities, designed for effective simulation. The CAPTCHAs are then partial random realizations of the random CAPTCHA word. We start with an initial random field (e.g., randomly scattered letter pieces) and use Gibbs resampling to re-simulate portions of the field repeatedly using these conditional probabilities until the word becomes human-readable. The residual randomness from the initial random field together with the random implementation of the CAPTCHA word provide significant resistance to attack. This results in a CAPTCHA, which is unrecognizable to modern optical character recognition but is recognized about 95% of the time in a human readability study.
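
    The following toy sketch, with an invented target glyph and invented coupling parameters, illustrates the Gibbs-resampling idea in miniature: starting from an i.i.d. random binary field, sites are repeatedly re-simulated from local conditional probabilities that encode a marginal pull toward the target pattern plus nearest-neighbour agreement, so the pattern gradually emerges from the residual randomness. It is a sketch of the mechanism only, not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(7)
        H, W = 12, 24

        # Target pattern playing the role of the "CAPTCHA word" (a crude letter T).
        target = np.zeros((H, W), dtype=int)
        target[2, 4:20] = 1
        target[2:10, 11:13] = 1

        field = rng.integers(0, 2, size=(H, W))   # initial random field
        alpha, beta = 2.0, 0.8                    # marginal pull, neighbour coupling

        def gibbs_sweep(f):
            for i in range(H):
                for j in range(W):
                    nb = f[max(i-1, 0):i+2, max(j-1, 0):j+2].sum() - f[i, j]
                    # Local field: attraction to the target plus neighbour agreement.
                    h = alpha * (2 * target[i, j] - 1) + beta * (nb - 4)
                    p1 = 1.0 / (1.0 + np.exp(-2 * h))
                    f[i, j] = int(rng.random() < p1)

        for sweep in range(30):
            gibbs_sweep(field)

        print(f"overlap with target after resampling: {(field == target).mean():.2f}")
        for row in field:
            print("".join(".#"[v] for v in row))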

  4. A Blinded Randomized Controlled Trial of Motivational Interviewing to Improve Adherence with Osteoporosis Medications: Design of the OPTIMA Trial

    PubMed Central

    Solomon, Daniel H.; Gleeson, Timothy; Iversen, Maura; Avorn, Jerome; Brookhart, M. Alan; Lii, Joyce; Losina, Elena; May, Frank; Patrick, Amanda; Shrank, William H.; Katz, Jeffrey N.

    2010-01-01

    Purpose While many effective treatments exist for osteoporosis, most people do not adhere to such treatments long-term. No proven interventions exist to improve osteoporosis medication adherence. We report here on the design and initial enrollment in an innovative randomized controlled trial aimed at improving adherence to osteoporosis treatments. Methods The trial represents a collaboration between academic researchers and a state-run pharmacy benefits program for low-income older adults. Beneficiaries beginning treatment with a medication for osteoporosis are targeted for recruitment. We randomize consenting individuals to receive 12 months of mailed education (control arm) or an intervention consisting of one-on-one telephone-based counseling and the mailed education. Motivational Interviewing forms the basis for the counseling program, which is delivered by seven trained and supervised health counselors over ten telephone calls. The counseling sessions include scripted dialogue, open-ended questions about medication adherence and its barriers, as well as structured questions. The primary endpoint of the trial is medication adherence measured over the 12-month intervention period. Secondary endpoints include fractures, nursing home admissions, health care resource utilization, and mortality. Results During the first 7 months of recruitment, we have screened 3,638 potentially eligible subjects. After an initial mailing, 1,115 (30.6%) opted out of telephone recruitment and 1,019 (28.0%) could not be successfully contacted. Of the remaining, 879 (24.2%) consented to participate and were randomized. Women comprise over 90% of all groups, mean ages range from 77 to 80 years, and the majority in all groups was white. The distribution of osteoporosis medications was comparable across groups and the median number of different prescription drugs used in the prior year was 8–10. Conclusions We have developed a novel intervention for improving osteoporosis medication adherence. The intervention is currently being tested in a large scale randomized controlled trial. If successful, the intervention may represent a useful model for improving adherence to other chronic treatments. PMID:19436935

  5. Random walk to a nonergodic equilibrium concept

    NASA Astrophysics Data System (ADS)

    Bel, G.; Barkai, E.

    2006-01-01

    Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.

  6. Anisotropy Induced Switching Field Distribution in High-Density Patterned Media

    NASA Astrophysics Data System (ADS)

    Talapatra, A.; Mohanty, J.

    We present here a micromagnetic study of the variation of the switching field distribution (SFD) in high-density patterned media as a function of the magnetic anisotropy of the system. We consider the manifold effects of magnetic anisotropy in terms of its magnitude, tilt of the anisotropy axis, and random arrangements of magnetic islands with random anisotropy values. Our calculations show that a reduction in anisotropy causes a linear decrease in coercivity, because the anisotropy energy tries to align the spins along a preferred crystallographic direction. A tilt of the anisotropy axis results in a decrease in the squareness of the hysteresis loop and hence facilitates switching. Finally, experimental challenges such as the lithographic distribution of magnetic islands, their orientation, and the creation of defects demand that the anisotropy distribution be modeled as random, with random repetitions. We explain that the range of anisotropy values and the number of bits with different anisotropy play a key role in the SFD, whereas the position of the bits and their repetitions do not contribute considerably.

  7. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occurs within a fixed time frame or limited space, it is often interesting to assess whether the successful outcomes have occurred completely at random, or whether they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
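
    The quantity the package treats exactly can also be estimated by brute force, which makes a useful cross-check. The sketch below (in Python rather than R, with invented trial parameters) simulates Bernoulli trials in a fixed window and tabulates the gaps between successes; under complete randomness the gaps are close to geometric, and an excess of short gaps signals clustering.

        import numpy as np

        rng = np.random.default_rng(3)
        n_trials, p, n_reps = 500, 0.02, 20_000

        gaps = []
        for _ in range(n_reps):
            hits = np.flatnonzero(rng.random(n_trials) < p)
            if hits.size >= 2:
                gaps.extend(np.diff(hits))
        gaps = np.asarray(gaps)

        # Under complete randomness the gap distribution is close to geometric;
        # an excess of short gaps would indicate clustering.
        print(f"mean gap      : {gaps.mean():.1f}")
        print(f"P(gap <= 10)  : {(gaps <= 10).mean():.4f}")
        print(f"geometric ref : {1 - (1 - p)**10:.4f}")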

  8. Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions

    PubMed Central

    König, Sandra; Schauer, Stefan

    2016-01-01

    Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. If the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the resulting total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572

  9. Biological monitoring of environmental quality: The use of developmental instability

    USGS Publications Warehouse

    Freeman, D.C.; Emlen, J.M.; Graham, J.H.; Hough, R. A.; Bannon, T.A.

    1994-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails.

  10. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to $-1$ as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on $[0,1]$; and (2), on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.

  11. On the genealogy of branching random walks and of directed polymers

    NASA Astrophysics Data System (ADS)

    Derrida, Bernard; Mottishaw, Peter

    2016-08-01

    It is well known that the mean-field theory of directed polymers in a random medium exhibits replica symmetry breaking with a distribution of overlaps which consists of two delta functions. Here we show that the leading finite-size correction to this distribution of overlaps has a universal character which can be computed explicitly. Our results can also be interpreted as genealogical properties of branching Brownian motion or of branching random walks.

  12. Modeling of Global BEAM Structure for Evaluation of MMOD Impacts to Support Development of a Health Monitoring System

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Vassilakos, Gregory J.

    2015-01-01

    This report summarizes the initial modeling of the global response of the Bigelow Expandable Activity Module (BEAM) to micrometeorite and orbital debris (MMOD) impacts using a structural, nonlinear, transient dynamic, finite element code. These models complement the on-orbit deployment of the Distributed Impact Detection System (DIDS) to support structural health monitoring studies. Two global models were developed. The first focused exclusively on impacts on the soft-goods (fabric-envelope) portion of BEAM. The second incorporates the bulkhead to support understanding of bulkhead impacts. These models were exercised for random impact locations, and responses were monitored at the on-orbit sensor locations. The report concludes with areas for future study.

  13. Effects of diversity on multiagent systems: Minority games

    NASA Astrophysics Data System (ADS)

    Wong, K. Y. Michael; Lim, S. W.; Gao, Zhuo

    2005-06-01

    We consider a version of large population games whose agents compete for resources using strategies with adaptable preferences. The games can be used to model economic markets, ecosystems, or distributed control. Diversity of initial preferences of strategies is introduced by randomly assigning biases to the strategies of different agents. We find that diversity among the agents reduces their maladaptive behavior. We find interesting scaling relations with diversity for the variance and other parameters such as the convergence time, the fraction of fickle agents, and the variance of wealth, illustrating their dynamical origin. When diversity increases, the scaling dynamics is modified by kinetic sampling and waiting effects. Analyses yield excellent agreement with simulations.

  14. Exact solutions for mass-dependent irreversible aggregations.

    PubMed

    Son, Seung-Woo; Christensen, Claire; Bizhani, Golnoosh; Grassberger, Peter; Paczuski, Maya

    2011-10-01

    We consider the mass-dependent aggregation process (k+1)X→X, given a fixed number of unit mass particles in the initial state. One cluster is chosen proportional to its mass and is merged into one, either with k neighbors in one dimension, or--in the well-mixed case--with k other clusters picked randomly. We find the same combinatorial exact solutions for the probability to find any given configuration of particles on a ring or line, and in the well-mixed case. The mass distribution of a single cluster exhibits scaling laws and the finite-size scaling form is given. The relation to the classical sum kernel of irreversible aggregation is discussed.

  15. Nonresonant interaction of heavy ions with electromagnetic ion cyclotron waves

    NASA Technical Reports Server (NTRS)

    Berchem, J.; Gendrin, R.

    1985-01-01

    The motion of a heavy ion in the presence of an intense ultralow-frequency electromagnetic wave propagating along the dc magnetic field is analyzed. Starting from the basic equations of motion and from their associated two invariants, the heavy ion velocity-space trajectories are drawn. It is shown that after a certain time, particles whose initial phase angles are randomly distributed tend to bunch together, provided that the wave intensity b₁ is sufficiently large. The importance of these results for the interpretation of the recently observed acceleration of singly charged He ions in conjunction with the occurrence of large-amplitude ion cyclotron waves in the equatorial magnetosphere is discussed.

  16. Nonequilibrium Precondensation of Classical Waves in Two Dimensions Propagating through Atomic Vapors

    NASA Astrophysics Data System (ADS)

    Šantić, Neven; Fusaro, Adrien; Salem, Sabeur; Garnier, Josselin; Picozzi, Antonio; Kaiser, Robin

    2018-02-01

    The nonlinear Schrödinger equation, used to describe the dynamics of quantum fluids, is known to be valid not only for massive particles but also for the propagation of light in a nonlinear medium, predicting condensation of classical waves. Here we report on the initial evolution of random waves with Gaussian statistics using atomic vapors as an efficient two-dimensional nonlinear medium. Experimental and theoretical analysis of near-field images reveals a phenomenon of nonequilibrium precondensation, characterized by a fast relaxation towards a precondensate fraction of up to 75%. Such precondensation is in contrast to complete thermalization to the Rayleigh-Jeans equilibrium distribution, which would require prohibitively long interaction lengths.

  17. High flow nasal cannula (HFNC) versus nasal continuous positive airway pressure (nCPAP) for the initial respiratory management of acute viral bronchiolitis in young infants: a multicenter randomized controlled trial (TRAMONTANE study).

    PubMed

    Milési, Christophe; Essouri, Sandrine; Pouyau, Robin; Liet, Jean-Michel; Afanetti, Mickael; Portefaix, Aurélie; Baleine, Julien; Durand, Sabine; Combes, Clémentine; Douillard, Aymeric; Cambonie, Gilles

    2017-02-01

    Nasal continuous positive airway pressure (nCPAP) is currently the gold standard for respiratory support for moderate to severe acute viral bronchiolitis (AVB). Although oxygen delivery via high flow nasal cannula (HFNC) is increasingly used, evidence of its efficacy and safety is lacking in infants. A randomized controlled trial was performed in five pediatric intensive care units (PICUs) to compare 7 cmH₂O nCPAP with 2 L/kg/min oxygen therapy administered with HFNC in infants up to 6 months old with moderate to severe AVB. The primary endpoint was the percentage of failure within 24 h of randomization using prespecified criteria. To satisfy noninferiority, the failure rate of HFNC had to lie within 15% of the failure rate of nCPAP. Secondary outcomes included success rate after crossover, intubation rate, length of stay, and serious adverse events. From November 2014 to March 2015, 142 infants were included and equally distributed into groups. The risk difference of −19% (95% CI −35 to −3%) did not allow the conclusion of HFNC noninferiority (p = 0.707). Superiority analysis suggested a relative risk of success 1.63 (95% CI 1.02-2.63) higher with nCPAP. The success rate with the alternative respiratory support, intubation rate, durations of noninvasive and invasive ventilation, skin lesions, and length of PICU stay were comparable between groups. No patient had air leak syndrome or died. In young infants with moderate to severe AVB, initial management with HFNC did not have a failure rate similar to that of nCPAP. This clinical trial was recorded in the National Library of Medicine registry (NCT 02457013).

  18. A school-based peer-led smoking prevention intervention with extracurricular activities: the LILT-LdP cluster randomized controlled trial design and study population.

    PubMed

    Bosi, Sandra; Gorini, Giuseppe; Tamelli, Marco; Monti, Claudia; Storani, Simone; Carreras, Giulia; Martini, Andrea; Allara, Elias; Angelini, Paola; Faggiano, Fabrizio

    2013-01-01

    Few school programs are effective in preventing adolescents' tobacco smoking initiation. The "Lega contro i Tumori - Luoghi di Prevenzione" is a cluster randomized controlled trial designed to evaluate a school-based peer-led smoking prevention intervention with extracurricular activities for students aged 14-15 years. This paper presents the study design and the baseline characteristics of the study population. Twenty secondary schools located in the Reggio Emilia province took part in the study. Five schools were excluded because they already participated in smoking prevention interventions. The schools were randomized to control or intervention arms. The study population consisted of students attending the first grade. Components of the intervention included 1) the out-of-school "Smoking Prevention Tour" (SPT) at the "Luoghi di Prevenzione" Center, a 4-hour (4 sessions) extracurricular activity; 2) the "Smoke-free Schools" intervention, combining a life-skills-based peer-led intervention at school, an in-depth lesson on one of the SPT sessions, and enforcement surveillance of the school antismoking policy. Tobacco use was studied through a questionnaire administered before and 6 months after the intervention. Eleven high schools and 9 vocational secondary schools took part in the study for a total of 2,476 out of 3,050 eligible students (81.2%). The proportions of respondents in high schools and vocational secondary schools were 90.9% and 64.5%, respectively (P <0.001). Intervention and control arms showed a different distribution of gender and school type, whereas no difference was observed in any tobacco-use characteristic. This study is one of the few Italian trials to evaluate the effectiveness of a school-based program for preventing smoking initiation.

  19. Implementation of a quantum random number generator based on the optimal clustering of photocounts

    NASA Astrophysics Data System (ADS)

    Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.

    2017-10-01

    To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable process of measurement of the system from which the initial random sequence is generated. This ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented with the use of the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events, together with a method of extracting a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach a yield rate of 64 Mbit/s for the output random sequence.
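
    The classical post-processing stage can be illustrated with a simple stand-in: the sketch below binarises simulated Poisson photocounts by parity and then applies von Neumann de-biasing. Both the binarisation rule and the extractor are illustrative assumptions; they are not the authors' optimal-clustering procedure.

        import numpy as np

        rng = np.random.default_rng(5)
        counts = rng.poisson(lam=0.8, size=200_000)   # simulated SiPM photocounts

        raw_bits = counts & 1                          # parity-based binarisation (biased)

        def von_neumann(bits):
            # Map pairs 01 -> 0 and 10 -> 1; discard 00 and 11. Removes the bias
            # of an i.i.d. Bernoulli source at the cost of throughput.
            pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
            keep = pairs[:, 0] != pairs[:, 1]
            return pairs[keep, 0]

        out = von_neumann(raw_bits)
        print(f"raw bias    : {raw_bits.mean():.4f}")
        print(f"output bias : {out.mean():.4f}")
        print(f"yield       : {out.size / raw_bits.size:.2f} bits per raw bit")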

  20. All optical mode controllable Er-doped random fiber laser with distributed Bragg gratings.

    PubMed

    Zhang, W L; Ma, R; Tang, C H; Rao, Y J; Zeng, X P; Yang, Z J; Wang, Z N; Gong, Y; Wang, Y S

    2015-07-01

    An all-optical method to control the lasing modes of Er-doped random fiber lasers (RFLs) is proposed and demonstrated. In the RFL, an Er-doped fiber (EDF) recorded with randomly separated fiber Bragg gratings (FBGs) is used as the gain medium and randomly distributed reflectors, as well as the controllable element. By combining random feedback from the FBG array and Fresnel feedback from a cleaved fiber end, multi-mode coherent random lasing is obtained with a threshold of 14 mW and a power efficiency of 14.4%. Moreover, a laterally injected control light is used to induce local gain perturbation, providing additional gain for certain random resonance modes. As a result, active mode selection in the RFL is realized by changing the locations of the laser cavity exposed to the control light.

  1. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
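
    The record does not reproduce the algorithm itself, but the standard construction it evaluates against is short: Box-Muller gives two independent standard normals, and a linear transformation imposes the desired means, standard deviations, and correlation. A sketch assuming that textbook approach (not necessarily the paper's exact method):

        import math
        import random

        def bivariate_normal_pair(mu1, mu2, s1, s2, rho, rnd=random.random):
            # Box-Muller: two independent standard normals from two uniforms.
            u1 = 1.0 - rnd()                 # in (0, 1], avoids log(0)
            u2 = rnd()
            r = math.sqrt(-2.0 * math.log(u1))
            z1 = r * math.cos(2.0 * math.pi * u2)
            z2 = r * math.sin(2.0 * math.pi * u2)
            # Correlate and rescale: (x, y) has correlation rho by construction.
            x = mu1 + s1 * z1
            y = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
            return x, y

        random.seed(0)
        pairs = [bivariate_normal_pair(1.0, -2.0, 0.5, 2.0, 0.8) for _ in range(100_000)]
        xs, ys = zip(*pairs)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
        sx = math.sqrt(sum((a - mx) ** 2 for a in xs) / len(xs))
        sy = math.sqrt(sum((b - my) ** 2 for b in ys) / len(ys))
        print(f"sample correlation: {cov / (sx * sy):.4f}   (target 0.8)")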

  2. Transcription, intercellular variability and correlated random walk.

    PubMed

    Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar

    2008-11-01

    We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is the random switching of transcription on and off. Under the condition that the transition rates between on and off are constant, we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
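
    A standard form of the scaled-Beta result can be checked directly: for a promoter switching between on and off at constant rates k_on and k_off, measured in units of the mRNA degradation rate, the stationary scaled mRNA level follows Beta(k_on, k_off). The sketch below simulates the telegraph dynamics with invented rate values and compares the first two moments with the Beta prediction; the specific parameterization is an assumption, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(11)
        k_on, k_off = 1.5, 0.8     # switching rates in units of the degradation rate (invented)
        dt, t_end, burn_in = 1e-3, 2000.0, 50.0

        state, m = 1, 0.0
        samples = []
        for step in range(int(t_end / dt)):
            # Telegraph switching of the promoter between off (0) and on (1).
            if state == 1 and rng.random() < k_off * dt:
                state = 0
            elif state == 0 and rng.random() < k_on * dt:
                state = 1
            # Scaled mRNA: production only while on, unit-rate degradation.
            m += (state - m) * dt
            if step % 1000 == 0 and step * dt > burn_in:
                samples.append(m)

        samples = np.asarray(samples)
        a, b = k_on, k_off
        print(f"simulated mean {samples.mean():.3f} vs Beta mean {a / (a + b):.3f}")
        print(f"simulated var  {samples.var():.4f} vs Beta var  "
              f"{a * b / ((a + b) ** 2 * (a + b + 1)):.4f}")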

  3. Can a new behaviorally oriented training process to improve lifting technique prevent occupationally related back injuries due to lifting?

    PubMed

    Lavender, Steven A; Lorenz, Eric P; Andersson, Gunnar B J

    2007-02-15

    A prospective randomized controlled trial. To determine the degree to which a new behavior-based lift training program (LiftTrainer; Ascension Technology, Burlington, VT) could reduce the incidence of low back disorders in distribution center jobs that require repetitive lifting. Most studies show programs aimed at training lifting techniques to be ineffective in preventing low back disorders, which may be due to their conceptual rather than behavioral learning approach. A total of 2144 employees in 19 distribution centers were randomized into either the LiftTrainer program or a video control group. In the LiftTrainer program, participants were individually trained in up to five 30-minute sessions while instrumented with motion capture sensors to quantify the L5/S1 moments. Twelve months following the initial training, injury data were obtained from company records. Survival analyses (Kaplan-Meier) indicated that there was no difference in injury rates between the 2 training groups. Likewise, there was no difference in the turnover rates. However, those with a low (<30 Nm) average twisting moment at the end of the first session experienced a significantly (P < 0.005) lower rate of low back disorder than controls. While overall the LiftTrainer program was not effective, those with twisting moments below 30 Nm reported fewer injuries, suggesting a shift in focus for "safe" lifting programs.

  4. Theoretical size distribution of fossil taxa: analysis of a null model

    PubMed Central

    Reed, William J; Hughes, Barry D

    2007-01-01

    Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249

  5. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = −λ ln… The conditional Weibull survival function takes the form exp(−[(x+s−γ)/η]^β + [(x−γ)/η]^β); random variables from the conditional Weibull distribution are generated using the inverse transform method… using a standard normal transformation and the inverse transform method. APPENDIX 3: DISTRIBUTIONS SUPPORTED BY THE MODEL… (1) Generate Y…
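
    A sketch of the conditional-Weibull sampler implied by the fragment above, assuming the survival form exp(−[(x+s−γ)/η]^β + [(x−γ)/η]^β) with γ the location, η the scale, and β the shape; the symbol names and test values are assumptions, since the record is fragmentary.

        import math
        import random

        def conditional_weibull(x, g, e, b, rnd=random.random):
            """Sample additional life s given survival to age x >= g (g=location,
            e=scale, b=shape; assumed parameterization)."""
            u = 1.0 - rnd()                   # U in (0, 1], avoids log(0)
            # Invert exp(-[(x+s-g)/e]**b + [(x-g)/e]**b) = u for s.
            base = ((x - g) / e) ** b
            return e * (base - math.log(u)) ** (1.0 / b) - (x - g)

        random.seed(1)
        g, e, b, x = 0.0, 100.0, 2.0, 50.0
        draws = [conditional_weibull(x, g, e, b) for _ in range(100_000)]
        print(f"mean residual life at age {x}: {sum(draws) / len(draws):.2f}")

        # Sanity check: with b = 1 the Weibull reduces to the exponential, whose
        # memoryless property makes the residual life independent of age x.
        exp_draws = [conditional_weibull(200.0, 0.0, 100.0, 1.0) for _ in range(100_000)]
        print(f"exponential check (expect ~100): {sum(exp_draws) / len(exp_draws):.2f}")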

  6. Empirical scaling of the length of the longest increasing subsequences of random walks

    NASA Astrophysics Data System (ADS)

    Mendonça, J. Ricardo G.

    2017-02-01

    We provide Monte Carlo estimates of the scaling of the length L_n of the longest increasing subsequences of n-step random walks for several different distributions of step lengths, short and heavy-tailed. Our simulations indicate that, barring possible logarithmic corrections, L_n ~ n^θ with the leading scaling exponent 0.60 ≲ θ ≲ 0.69 for the heavy-tailed distributions of step lengths examined, with values increasing as the distribution becomes more heavy-tailed, and θ ≃ 0.57 for distributions of finite variance, irrespective of the particular distribution. The results are consistent with existing rigorous bounds for θ, although in a somewhat surprising manner. For random walks with step lengths of finite variance, we conjecture that the correct asymptotic behavior of L_n is given by √n ln n, and also propose the form for the subleading asymptotics. The distribution of L_n was found to follow a simple scaling form with scaling functions that vary with θ. Accordingly, when the step lengths are of finite variance they seem to be universal. The nature of this scaling remains unclear, since we lack a working model, microscopic or hydrodynamic, for the behavior of the length of the longest increasing subsequences of random walks.
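
    Measurements of this kind reduce to computing the LIS length efficiently; patience sorting does it in O(n log n). The sketch below, with illustrative walk sizes and replicate counts (far smaller than a serious study would use), estimates an effective exponent for a finite-variance (Gaussian) and a heavy-tailed (Cauchy) step distribution.

        import bisect
        import numpy as np

        def lis_length(seq):
            # Patience sorting: tails[k] is the smallest possible tail of an
            # increasing subsequence of length k+1 seen so far.
            tails = []
            for v in seq:
                k = bisect.bisect_left(tails, v)
                if k == len(tails):
                    tails.append(v)
                else:
                    tails[k] = v
            return len(tails)

        rng = np.random.default_rng(2)
        for name, step in [("gaussian", lambda n: rng.standard_normal(n)),
                           ("cauchy  ", lambda n: rng.standard_cauchy(n))]:
            ns = [1000, 4000, 16000]
            ls = [np.mean([lis_length(np.cumsum(step(n))) for _ in range(20)])
                  for n in ns]
            # Effective exponent from the smallest and largest sizes.
            theta = np.log(ls[-1] / ls[0]) / np.log(ns[-1] / ns[0])
            print(f"{name}: mean LIS {np.round(ls, 1)}  effective theta ~ {theta:.2f}")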

  7. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel-groups design, assuming various response time distributions. In the RPPD, at some point all subjects receive the experimental therapy, and exposure to placebo is only for a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across response-time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel-groups RCT had higher power than the RPPD. The sample size requirement varies depending on the underlying hazard distribution, and the RPPD requires more subjects than the parallel-groups design to achieve similar power. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. An application of randomization for detecting evidence of thermoregulation in timber rattlesnakes (Crotalus horridus) from northwest Arkansas.

    PubMed

    Wills, C A; Beaupre, S J

    2000-01-01

    Most reptiles maintain their body temperatures within normal functional ranges through behavioral thermoregulation. Under some circumstances, thermoregulation may be a time-consuming activity, and thermoregulatory needs may impose significant constraints on the activities of ectotherms. A necessary (but not sufficient) condition for demonstrating thermoregulation is a difference between observed body temperature distributions and available operative temperature distributions. We examined operative and body temperature distributions of the timber rattlesnake (Crotalus horridus) for evidence of thermoregulation. Specifically, we compared the distribution of available operative temperatures in the environment to snake body temperatures during August and September. Operative temperatures were measured using 48 physical models that were randomly deployed in the environment and connected to a Campbell CR-21X data logger. Body temperatures (n=1,803) were recorded from 12 radiotagged snakes using temperature-sensitive telemetry. Separate randomization tests were conducted for each hour of day within each month. Actual body temperature distributions differed significantly from operative temperature distributions at most time points considered. Thus, C. horridus exhibits a necessary (but not sufficient) condition for demonstrating thermoregulation. However, unlike some desert ectotherms, we found no compelling evidence for thermal constraints on surface activity. Randomization may prove to be a powerful technique for drawing inferences about thermoregulation without reliance on studies of laboratory thermal preference.
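
    The randomization-test logic is simple to sketch: pool the body and operative temperatures, repeatedly shuffle the pooled sample, and compare the observed difference in means with its permutation distribution. The data below are invented for illustration; the original analysis ran separate tests per hour of day within each month.

        import numpy as np

        rng = np.random.default_rng(9)

        # Invented data: operative temperatures available in the habitat vs. body
        # temperatures, the latter shifted and narrowed as thermoregulation would do.
        operative = rng.normal(27.0, 5.0, size=300)
        body = rng.normal(30.0, 2.0, size=120)

        observed = body.mean() - operative.mean()
        pooled = np.concatenate([body, operative])

        n_perm, count = 10_000, 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            diff = pooled[: body.size].mean() - pooled[body.size:].mean()
            if abs(diff) >= abs(observed):
                count += 1

        p_value = (count + 1) / (n_perm + 1)
        print(f"observed mean difference: {observed:.2f} C")
        print(f"randomization p-value   : {p_value:.4f}")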

  9. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X_(1)), E(Z_(1)), and E(Y_(1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_(1))/E(Y_(1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minimums.
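
    A quick numerical check of the quoted identity, with illustrative parameters: for i.i.d. geometric variables and exponential variables of matching mean, the ratio E(X_(1))/E(Y_(1)) should equal the expected number of ties at the geometric minimum (a short calculation gives np/(1 − (1−p)^n) for both).

        import numpy as np

        rng = np.random.default_rng(4)
        n, p, reps = 5, 0.2, 200_000

        geo = rng.geometric(p, size=(reps, n))           # supported on 1, 2, ...
        expo = rng.exponential(1.0 / p, size=(reps, n))  # matching mean 1/p

        min_geo = geo.min(axis=1)
        ties = (geo == min_geo[:, None]).sum(axis=1)     # count of ties at the minimum

        ratio = min_geo.mean() / expo.min(axis=1).mean()
        print(f"E(geometric min) / E(exponential min): {ratio:.3f}")
        print(f"expected number of ties at minimum  : {ties.mean():.3f}")
        print(f"closed form np/(1-(1-p)^n)          : {n * p / (1 - (1 - p)**n):.3f}")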

  10. Experimental demonstration of an active phase randomization and monitor module for quantum key distribution

    NASA Astrophysics Data System (ADS)

    Sun, Shi-Hai; Liang, Lin-Mei

    2012-08-01

    Phase randomization is a very important assumption in the BB84 quantum key distribution (QKD) system with weak coherent source; otherwise, eavesdropper may spy the final key. In this Letter, a stable and monitored active phase randomization scheme for the one-way and two-way QKD system is proposed and demonstrated in experiments. Furthermore, our scheme gives an easy way for Alice to monitor the degree of randomization in experiments. Therefore, we expect our scheme to become a standard part in future QKD systems due to its secure significance and feasibility.

  11. The harmonic impact of electric vehicle battery charging

    NASA Astrophysics Data System (ADS)

    Staats, Preston Trent

    The potential widespread introduction of the electric vehicle (EV) presents both opportunities and challenges to the power systems engineers who will be required to supply power to EV batteries. One of the challenges associated with EV battery charging comes from the potentially high harmonic currents associated with the conversion of ac power system voltages to dc EV battery voltages. Harmonic currents lead to increased losses in distribution circuits and reduced life expectancy of such power distribution components as capacitors and transformers. Harmonic current injections also cause harmonic voltages on power distribution networks. These distorted voltages can affect power system loads, and specific standards exist regulating acceptable voltage distortion. This dissertation develops and presents the theory required to evaluate the electric vehicle battery charger as a harmonic distorting load and its possible harmonic impact on various aspects of power distribution systems. The work begins by developing a method for evaluating the net harmonic current injection of a large collection of EV battery chargers which accounts for variation in the start time and initial battery state of charge between individual chargers. Next, this method is analyzed to evaluate the effect of input parameter variation on the net harmonic currents predicted by the model. We then turn to an evaluation of the impact of EV charger harmonic currents on power distribution systems, first evaluating the impact of these currents on a substation transformer and then on power distribution system harmonic voltages. The method presented accounts for the uncertainty in EV harmonic current injections by modeling the start time and initial battery state of charge (SOC) of an individual EV battery charger as random variables. Thus, the net harmonic current and distribution system harmonic voltages are formulated in a stochastic framework. Results indicate that considering variation in start time and SOC leads to reduced estimates of harmonic current injection when compared to more traditional methods that do not account for variation. Evaluation of power distribution system harmonic voltages suggests that for any power distribution network there is a definite threshold penetration of EVs, below which the total harmonic distortion of voltage exceeds 5% at only an insignificant number of buses. Thus, most existing distribution systems will probably be able to accommodate the early introduction of EV battery charging without widespread harmonic voltage problems.
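
    A toy illustration of why randomizing start time and SOC reduces the net harmonic estimate: when per-charger harmonic phasors have diverse phases, they partially cancel, so the stochastic sum falls well below the coherent worst case. Every numerical relation below (charging window, SOC range, magnitude and phase models) is an assumption, not the dissertation's model:

    import numpy as np

    rng = np.random.default_rng(14)
    n_ev, reps = 500, 1000
    results = []
    for _ in range(reps):
        start = rng.uniform(18, 23, n_ev)       # start time (h), assumed window
        soc = rng.uniform(0.2, 0.8, n_ev)       # initial state of charge, assumed
        mag = 2.0 * (1 - soc)                   # assumed: magnitude falls with SOC
        phase = 2 * np.pi * (start % 1.0)       # assumed: phase set by start time
        results.append(np.abs(np.sum(mag * np.exp(1j * phase))))

    worst_case = n_ev * 2.0 * (1 - 0.2)         # all phasors aligned, lowest SOC
    print(np.mean(results), worst_case)         # stochastic sum << coherent sum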

  12. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, the monitor could instead serve as an efficient seed generator feeding a faster algorithmic random number generator, or a buffer could be created.
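
    A sketch of the detrend-and-scale recipe on a synthetic count series (the trend shape, noise level, and smoothing factor are assumptions; real neutron monitor tables would replace the synthetic data). The uniforms here come from the probability integral transform, one reading of the abstract's inverse-transform suggestion:

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    t = np.arange(1440.0)                       # one day of one-minute counts
    counts = 100 + 5 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 3, t.size)

    trend = UnivariateSpline(t, counts, s=9 * t.size)(t)  # spline fit of raw data
    resid = counts - trend                      # extracted stochastic component
    z = (resid - resid.mean()) / resid.std()    # ~ standard normal variates
    u = norm.cdf(z)                             # map to uniforms on (0, 1)
    print(z.mean(), z.std(), u.min(), u.max())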

  13. Reducing financial avalanches by random investments

    NASA Astrophysics Data System (ADS)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk

    2013-12-01

    Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of random traders is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders.

  14. Reducing financial avalanches by random investments.

    PubMed

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk

    2013-12-01

    Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of random traders is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders.

  15. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
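
    The overdispersion itself is easy to reproduce. A sketch with an identity true covariance (all population eigenvalues equal to 1), where the sample eigenvalues nonetheless spread far from 1 through sampling error alone (dimensions are arbitrary; the paper's TW scaling step is omitted):

    import numpy as np

    rng = np.random.default_rng(12)
    p_traits, n_obs = 10, 50
    X = rng.standard_normal((n_obs, p_traits))        # traits, identity covariance
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False)) # sample covariance spectrum
    print(eig.min(), eig.max())  # roughly (1 ± sqrt(p/n))^2, i.e. ~0.3 and ~2.1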

  16. Modeling Stochastic Variability in the Numbers of Surviving Salmonella enterica, Enterohemorrhagic Escherichia coli, and Listeria monocytogenes Cells at the Single-Cell Level in a Desiccated Environment

    PubMed Central

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso

    2016-01-01

    Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. IMPORTANCE We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. PMID:27940547

  17. Modeling Stochastic Variability in the Numbers of Surviving Salmonella enterica, Enterohemorrhagic Escherichia coli, and Listeria monocytogenes Cells at the Single-Cell Level in a Desiccated Environment.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2017-02-15

    Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.
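
    The Poisson-process argument in these two records can be sketched as thinning: if initial cell numbers per well are Poisson(λ0) and each cell survives to time t independently with probability p(t), the survivor count remains Poisson with parameter λ0·p(t), so mean and variance stay equal. The Weibull-type survival curve and its parameters below are assumptions:

    import numpy as np

    rng = np.random.default_rng(4)
    lam0, reps = 2.0, 100_000
    delta, kappa = 0.05, 0.8            # assumed Weibull-type rate parameters

    def survivors(t):
        n0 = rng.poisson(lam0, reps)                 # initial cells per well
        p = np.exp(-((delta * t) ** kappa))          # per-cell survival probability
        return rng.binomial(n0, p)                   # independent thinning

    for t in (0, 24, 96):
        s = survivors(t)
        print(t, s.mean(), s.var())   # mean ≈ variance, as a Poisson predicts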

  18. Evaluation of the path integral for flow through random porous media

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.

    2018-04-01

    We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.
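
    A minimal Monte Carlo counterpart of this setup (finite differences rather than the paper's path-integral MCMC): sample a correlated lognormal permeability field and solve the 1-D pressure equation with Dirichlet boundaries, accumulating pressure statistics. Grid size, correlation length, and boundary pressures are assumptions:

    import numpy as np

    rng = np.random.default_rng(5)
    n, corr_len, reps = 100, 10.0, 2000
    x = np.arange(n)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # corr. of log K
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))

    samples = []
    for _ in range(reps):
        K = np.exp(L @ rng.standard_normal(n))      # correlated lognormal field
        Kf = 2 * K[:-1] * K[1:] / (K[:-1] + K[1:])  # harmonic interface values
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = 1.0, 0.0                      # Dirichlet: p=1 in, p=0 out
        for i in range(1, n - 1):                   # d/dx( K dp/dx ) = 0
            A[i, i - 1], A[i, i], A[i, i + 1] = Kf[i - 1], -(Kf[i - 1] + Kf[i]), Kf[i]
        samples.append(np.linalg.solve(A, b))

    p = np.array(samples)
    print(p.mean(axis=0)[::20], p.std(axis=0)[::20])  # pressure statistics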

  19. Testing a pollen-parent fecundity distribution model on seed-parent fecundity distributions in bee-pollinated forage legume polycrosses

    USDA-ARS?s Scientific Manuscript database

    Random mating (i.e., panmixis) is a fundamental assumption in quantitative genetics. In outcrossing bee-pollinated perennial forage legume polycrosses, mating is assumed by default to follow theoretical random mating. This assumption informs breeders of expected inbreeding estimates based on polycro...

  20. 29 CFR 1926.1413 - Wire rope-inspection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Apparent deficiencies in this category are: (A) Visible broken wires, as follows: (1) In running wire ropes: Six randomly distributed broken wires in one rope lay or three broken wires in one strand in one rope... around the rope. (2) In rotation resistant ropes: Two randomly distributed broken wires in six rope...

  1. 29 CFR 1926.1413 - Wire rope-inspection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Apparent deficiencies in this category are: (A) Visible broken wires, as follows: (1) In running wire ropes: Six randomly distributed broken wires in one rope lay or three broken wires in one strand in one rope... around the rope. (2) In rotation resistant ropes: Two randomly distributed broken wires in six rope...

  2. 29 CFR 1926.1413 - Wire rope-inspection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Apparent deficiencies in this category are: (A) Visible broken wires, as follows: (1) In running wire ropes: Six randomly distributed broken wires in one rope lay or three broken wires in one strand in one rope... around the rope. (2) In rotation resistant ropes: Two randomly distributed broken wires in six rope...

  3. Mapping of medical acronyms and initialisms to Medical Subject Headings (MeSH) across selected systems

    PubMed Central

    Shultz, Mary

    2006-01-01

    Introduction: Given the common use of acronyms and initialisms in the health sciences, searchers may be entering these abbreviated terms rather than full phrases when searching online systems. The purpose of this study is to evaluate how various MEDLINE Medical Subject Headings (MeSH) interfaces map acronyms and initialisms to the MeSH vocabulary. Methods: The interfaces used in this study were: the PubMed MeSH database, the PubMed Automatic Term Mapping feature, the NLM Gateway Term Finder, and Ovid MEDLINE. Acronyms and initialisms were randomly selected from 2 print sources. The test data set included 415 randomly selected acronyms and initialisms whose related meanings were found to be MeSH terms. Each acronym and initialism was entered into each MEDLINE MeSH interface to determine if it mapped to the corresponding MeSH term. Separately, 46 commonly used acronyms and initialisms were tested. Results: While performance differed widely, the success rates were low across all interfaces for the randomly selected terms. The common acronyms and initialisms tested at higher success rates across the interfaces, but the differences between the interfaces remained. Conclusion: Online interfaces do not always map medical acronyms and initialisms to their corresponding MeSH phrases. This may lead to inaccurate results and missed information if acronyms and initialisms are used in search strategies. PMID:17082832

  4. A Permutation-Randomization Approach to Test the Spatial Distribution of Plant Diseases.

    PubMed

    Lione, G; Gonthier, P

    2016-01-01

    The analysis of the spatial distribution of plant diseases requires the availability of trustworthy geostatistical methods. The mean distance tests (MDT) are here proposed as a series of permutation and randomization tests to assess the spatial distribution of plant diseases when the variable of phytopathological interest is categorical. User-friendly software to perform the tests is provided. Estimates of power and type I error, obtained with Monte Carlo simulations, showed the reliability of the MDT (power > 0.80; type I error < 0.05). A biological validation on the spatial distribution of spores of two fungal pathogens causing root rot on conifers was successfully performed by verifying the consistency between the MDT responses and previously published data. An application of the MDT was carried out to analyze the relation between plantation density and the distribution of infection by Gnomoniopsis castanea, an emerging fungal pathogen causing nut rot on sweet chestnut. Trees carrying nuts infected by the pathogen were randomly distributed in areas with different plantation densities, suggesting that the distribution of G. castanea was not related to plantation density. The MDT could be used to analyze the spatial distribution of plant diseases in both agricultural and natural ecosystems.
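
    One way to read the mean-distance idea as code: compare the mean pairwise distance among infected trees with its permutation distribution under random relabeling of infection status. The coordinates, infection rate, and one-sided direction are illustrative; the published MDT software differs in detail:

    import numpy as np

    rng = np.random.default_rng(6)
    xy = rng.uniform(0, 100, size=(200, 2))      # hypothetical tree coordinates
    infected = rng.random(200) < 0.2             # hypothetical infection labels

    def mean_dist(pts):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        return d[np.triu_indices(len(pts), 1)].mean()

    obs = mean_dist(xy[infected])
    null = []
    labels = infected.copy()
    for _ in range(2000):
        rng.shuffle(labels)                      # permute infection status
        null.append(mean_dist(xy[labels]))
    # small observed mean distance indicates spatial clustering of infection
    p = (np.sum(np.array(null) <= obs) + 1) / 2001
    print(obs, p)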

  5. Effects of Changing Body Weight Distribution on Mediolateral Stability Control during Gait Initiation

    PubMed Central

    Caderby, Teddy; Yiou, Eric; Peyrot, Nicolas; de Viviés, Xavier; Bonazzi, Bruno; Dalleau, Georges

    2017-01-01

    During gait initiation, anticipatory postural adjustments (APA) precede the execution of the first step. It is generally acknowledged that these APA contribute to forward progression but also serve to stabilize the whole body in the mediolateral direction during step execution. Although previous studies have shown that changes in the distribution of body weight between both legs influence motor performance during gait initiation, it is not known whether and how such changes affect a person’s postural stability during this task. The aim of this study was to investigate the effects of changing initial body weight distribution between legs on mediolateral postural stability during gait initiation. Changes in body weight distribution were induced under experimental conditions by modifying the frontal plane distribution of an external load located at the participants’ waists. Fifteen healthy adults performed a gait initiation series at a similar speed under three conditions: with the overload evenly distributed over both legs; with the overload strictly distributed over the swing-limb side; and with the overload strictly distributed over the stance-leg side. Our results showed that the mediolateral location of center-of-mass (CoM) during the initial upright posture differed between the experimental conditions, indicating modifications in the initial distribution of body weight between the legs according to the load distribution. While the parameters related to the forward progression remained unchanged, the alterations in body weight distribution elicited adaptive changes in the amplitude of APA in the mediolateral direction (i.e., maximal mediolateral shift of the center of pressure (CoP)), without variation in their duration. Specifically, it was observed that the amplitude of APA was modulated in such a way that mediolateral dynamic stability at swing foot-contact, quantified by the margin of stability (i.e., the distance between the base of support boundary and the extrapolated CoM position), did not vary between the conditions. These findings suggest that APA seem to be scaled as a function of the initial body weight distribution between both legs so as to maintain optimal conditions of stability during gait initiation. PMID:28396629

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    ALAM,TODD M.

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  7. Generating and using truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2012-01-01

    The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing.
    Program summary:
    Program title: TRQS
    Catalogue identifier: AEKA_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 7924
    No. of bytes in distributed program, including test data, etc.: 88 651
    Distribution format: tar.gz
    Programming language: Mathematica, C
    Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a computer supporting a recent version of Mathematica
    Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit)
    RAM: Case dependent
    Classification: 4.15
    Nature of problem: Generation of random density matrices.
    Solution method: Use of a physical quantum random number generator.
    Running time: Generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.

  8. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
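
    The simulation idea is compact enough to sketch: integrate draws from each noise distribution up to a fixed threshold, reset, and collect inter-impulse intervals. Means and variances are matched across the three noise distributions; all numerical values are assumptions:

    import numpy as np

    rng = np.random.default_rng(7)
    mu, sigma, theta, n_steps = 1.0, 0.5, 20.0, 200_000

    noises = {
        "normal":  lambda n: rng.normal(mu, sigma, n),
        "gamma":   lambda n: rng.gamma((mu / sigma) ** 2, sigma**2 / mu, n),
        "uniform": lambda n: rng.uniform(mu - np.sqrt(3) * sigma,
                                         mu + np.sqrt(3) * sigma, n),
    }

    for name, draw in noises.items():
        x = draw(n_steps)
        intervals, acc, count = [], 0.0, 0
        for v in x:                          # integrate-and-fire accumulator
            acc += v
            count += 1
            if acc >= theta:                 # impulse fired: reset integrator
                intervals.append(count)
                acc, count = 0.0, 0
        iv = np.array(intervals)
        print(name, iv.mean(), iv.std() / iv.mean())  # mean interval and CV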

  9. Multiwavelength generation in a random distributed feedback fiber laser using an all fiber Lyot filter.

    PubMed

    Sugavanam, S; Yan, Z; Kamynin, V; Kurkov, A S; Zhang, L; Churkin, D V

    2014-02-10

    Multiwavelength lasing in the random distributed feedback fiber laser is demonstrated by employing an all fiber Lyot filter. Stable multiwavelength generation is obtained, with each line exhibiting sub-nanometer line-widths. A flat power distribution over multiple lines is obtained, which indicates that the power between lines is redistributed in nonlinear mixing processes. The multiwavelength generation is observed both in first and second Stokes waves.

  10. Simulation of MAD Cow Disease Propagation

    NASA Astrophysics Data System (ADS)

    Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.

    Computer simulation of the dynamics of BSE disease is presented. Both vertical (to offspring) and horizontal (to neighbors) mechanisms of disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000 with the initial population randomly distributed on the lattice. The disease may be introduced either with the initial population or by spontaneous development of BSE in an individual, at a small frequency. The main results show a critical probability of BSE transmission above which the disease persists in the population. This value is sensitive to possible spatial clustering of the population, and it also depends on the mechanism responsible for the disease onset, evolution, and propagation. A threshold birth rate below which the population goes extinct is seen. Above this threshold the population is disease-free at equilibrium until another birth rate value is reached, at which point the disease is present in the population. For typical model parameters used in the simulation, which may correspond to the mad cow disease, we are close to the BSE-free case.

  11. Surface faceting and elemental diffusion behaviour at atomic scale for alloy nanoparticles during in situ annealing

    PubMed Central

    Chi, Miaofang; Wang, Chao; Lei, Yinkai; Wang, Guofeng; Li, Dongguo; More, Karren L.; Lupini, Andrew; Allard, Lawrence F.; Markovic, Nenad M.; Stamenkovic, Vojislav R.

    2015-01-01

    The catalytic performance of nanoparticles is primarily determined by the precise nature of the surface and near-surface atomic configurations, which can be tailored by post-synthesis annealing effectively and straightforwardly. Understanding the complete dynamic response of surface structure and chemistry to thermal treatments at the atomic scale is imperative for the rational design of catalyst nanoparticles. Here, by tracking the same individual Pt3Co nanoparticles during in situ annealing in a scanning transmission electron microscope, we directly discern five distinct stages of surface elemental rearrangements in Pt3Co nanoparticles at the atomic scale: initial random (alloy) elemental distribution; surface platinum-skin-layer formation; nucleation of structurally ordered domains; ordered framework development and, finally, initiation of amorphization. Furthermore, a comprehensive interplay among phase evolution, surface faceting and elemental inter-diffusion is revealed, and supported by atomistic simulations. This work may pave the way towards designing catalysts through post-synthesis annealing for optimized catalytic performance. PMID:26576477

  12. Surface faceting and elemental diffusion behaviour at atomic scale for alloy nanoparticles during in situ annealing

    DOE PAGES

    Chi, Miaofang; Wang, Chao; Lei, Yinkai; ...

    2015-11-18

    The catalytic performance of nanoparticles is primarily determined by the precise nature of the surface and near-surface atomic configurations, which can be tailored by post-synthesis annealing effectively and straightforwardly. Understanding the complete dynamic response of surface structure and chemistry to thermal treatments at the atomic scale is imperative for the rational design of catalyst nanoparticles. Here, by tracking the same individual Pt3Co nanoparticles during in situ annealing in a scanning transmission electron microscope, we directly discern five distinct stages of surface elemental rearrangements in Pt3Co nanoparticles at the atomic scale: initial random (alloy) elemental distribution; surface platinum-skin-layer formation; nucleation of structurally ordered domains; ordered framework development and, finally, initiation of amorphization. Furthermore, a comprehensive interplay among phase evolution, surface faceting and elemental inter-diffusion is revealed, and supported by atomistic simulations. In conclusion, this work may pave the way towards designing catalysts through post-synthesis annealing for optimized catalytic performance.

  13. On the logistic equation subject to uncertainties in the environmental carrying capacity and initial population density

    NASA Astrophysics Data System (ADS)

    Dorini, F. A.; Cecconello, M. S.; Dorini, L. B.

    2016-04-01

    It is recognized that handling uncertainty is essential to obtain more reliable results in modeling and computer simulation. This paper aims to discuss the logistic equation subject to uncertainties in two parameters: the environmental carrying capacity, K, and the initial population density, N0. We first provide the closed-form results for the first probability density function of time-population density, N(t), and its inflection point, t*. We then use the Maximum Entropy Principle to determine both K and N0 density functions, treating such parameters as independent random variables and considering fluctuations of their values for a situation that commonly occurs in practice. Finally, closed-form results for the density functions and statistical moments of N(t), for a fixed t > 0, and of t* are provided, considering the uniform distribution case. We carried out numerical experiments to validate the theoretical results and compared them against that obtained using Monte Carlo simulation.
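
    A Monte Carlo companion to the closed-form results described above: sample K and N0 from uniform distributions (per the paper's uniform case; the ranges below are assumptions) and propagate them through the logistic solution and its inflection time:

    import numpy as np

    rng = np.random.default_rng(8)
    r, t, reps = 0.5, 5.0, 100_000
    K = rng.uniform(80, 120, reps)            # random carrying capacity
    N0 = rng.uniform(5, 15, reps)             # random initial population density

    # closed-form logistic solution N(t) and inflection time (valid: N0 < K/2)
    N = K * N0 * np.exp(r * t) / (K + N0 * (np.exp(r * t) - 1))
    t_star = np.log((K - N0) / N0) / r

    print(N.mean(), N.std())                  # moments of N(t) at fixed t
    print(t_star.mean(), t_star.std())        # moments of the inflection time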

  14. Phase-space perspective on the wavelength-dependent electron correlation of strong-field double ionization of Xe

    NASA Astrophysics Data System (ADS)

    Shao, Yun; Yuan, Zongqiang; Ye, Difa; Fu, Libin; Liu, Ming-Ming; Sun, Xufei; Wu, Chengyin; Liu, Jie; Gong, Qihuang; Liu, Yunquan

    2017-12-01

    We measure the wavelength-dependent correlated-electron momentum (CEM) spectra of strong-field double ionization of Xe atoms, and observe a significant change from a roughly nonstructured (uncorrelated) pattern at 795 nm to an elongated distribution with V-shaped structure (correlated) at higher wavelengths of 1320 and 1810 nm, pointing to the transition of the ionization dynamics imprinted in the momentum distributions. These observations are well reproduced by a semiclassical model using Green-Sellin-Zachor potential to take into account the screening effect. We show that the momentum distribution of Xe2+ undergoes a bifurcation structure emerging from single-hump to double-hump structure as the laser wavelength increases, which is dramatically different from that of He2+, indicating the complex multi-electron effect. By back analyzing the double ionization trajectories in the phase space (the initial transverse momentum and the laser phase at the tunneling exit) of the first tunneled electrons, we provide deep insight into the physical origin for electron correlation dynamics. We find that a random distribution in phase-space is responsible for a less distinct structured CEM spectrum at shorter wavelength. While increasing the laser wavelength, a topology-invariant pattern in phase-space appears, leading to the clearly visible V-shaped structures.

  15. Free-Space Quantum Key Distribution using Polarization Entangled Photons

    NASA Astrophysics Data System (ADS)

    Kurtsiefer, Christian

    2007-06-01

    We report on a complete experimental implementation of a quantum key distribution protocol through a free-space link using polarization-entangled photon pairs from a compact parametric down-conversion source [1]. Based on a BB84-equivalent protocol, we generated, without interruption over 10 hours, a secret key over a free-space optical link distance of 1.5 km at a rate of up to 950 bits per second after error correction and privacy amplification. Our system is based on two time stamp units and relies on no specific hardware channel for coincidence identification besides an IP link. For that, initial clock synchronization with an accuracy of better than 2 ns is achieved, based on a conventional NTP protocol and a tiered cross-correlation of time tags on both sides. Time tags are used to servo a local clock, allowing a streamed measurement on correctly identified photon pairs. Contrary to the majority of quantum key distribution systems, this approach does not require a trusted large-bandwidth random number generator, but instead integrates that step into the physical key generation process. We discuss our current progress in implementing key distribution via an atmospheric link during daylight conditions, and possible attack scenarios on a physical timing-information side channel in an entanglement-based key distribution system. [1] I. Marcikic, A. Lamas-Linares, C. Kurtsiefer, Appl. Phys. Lett. 89, 101122 (2006).

  16. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  17. Directed Random Markets: Connectivity Determines Money

    NASA Astrophysics Data System (ADS)

    Martínez-Martínez, Ismael; López-Ruiz, Ricardo

    2013-12-01

    The Boltzmann-Gibbs (BG) distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system where random and undirected exchanges are allowed. When considering a model with uniform savings in the exchanges, the final distribution is close to the gamma family. In this paper, we implement these exchange rules on networks and find that the stationary probability distributions are robust, unaffected by the topology of the underlying network. We then introduce a new family of interactions: random but directed ones. In this case, the topology is found to be decisive, and the mean money per economic agent is related to the degree of the node representing the agent in the network. The relation between the mean money per economic agent and its degree is shown to be linear.
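
    A sketch of the baseline undirected random-exchange economy said to relax to the Boltzmann-Gibbs distribution (fully connected, no savings; the network topologies and directed rules studied in the paper are omitted):

    import numpy as np

    rng = np.random.default_rng(9)
    agents, steps = 1000, 200_000
    money = np.ones(agents)                  # everyone starts with one unit

    for _ in range(steps):
        i, j = rng.integers(agents, size=2)
        if i != j:
            pot = money[i] + money[j]
            eps = rng.random()               # random split of the pooled money
            money[i], money[j] = eps * pot, (1 - eps) * pot

    # exponential (Boltzmann-Gibbs) form: log-histogram is near-linear in m
    hist, edges = np.histogram(money, bins=40, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0
    slope = np.polyfit(centers[mask], np.log(hist[mask]), 1)[0]
    print(slope)                             # ≈ -1 for mean money of one unit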

  18. Dynamic stability of nano-fibers self-assembled from short amphiphilic A6D peptides

    NASA Astrophysics Data System (ADS)

    Nikoofard, Narges; Maghsoodi, Fahimeh

    2018-04-01

    Self-assembly of A6D amphiphilic peptides in explicit water is studied by using coarse-grained molecular dynamics simulations. It is observed that the self-assembly of randomly distributed A6D peptides leads to the formation of a network of nano-fibers. Two other simulations with cylindrical nano-fibers as the initial configuration show the dynamic stability of the self-assembled nano-fibers. As a striking feature, notable fluctuations occur along the axes of the nano-fibers. Depending on the number of peptides per unit length of the nano-fiber, flat-shaped bulges or spiral shapes along the nano-fiber axis are observed at the fluctuations. Analysis of the particle distribution around the nano-fiber indicates that the hydrophobic core and the hydrophilic shell of the nano-structure are preserved in both simulations. The size of the deformations and their correlation times are different in the two simulations. This study gives new insights into the dynamics of the self-assembled nano-structures of short amphiphilic peptides.

  19. Heavy metal content in tea soils and their distribution in different parts of tea plants, Camellia sinensis (L). O. Kuntze.

    PubMed

    Seenivasan, Subbiah; Anderson, Todd Alan; Muraleedharan, Narayanannair

    2016-07-01

    Soils contaminated with heavy metals may pose a threat to environment and human health if metals enter the food chain over and above threshold levels. In general, there is a lack of information on the presence of heavy metals in tea [Camellia sinensis (L). O. Kuntze] plants and the soils in which they are grown. Therefore, an attempt was made to establish a database on the important heavy metals: cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb). For an initial survey on heavy metals, soil samples were collected randomly from tea-growing areas of Tamil Nadu, Kerala, and Karnataka, India. Parallel studies were conducted in the greenhouse on uptake of Pb, Cd, and Ni from soils supplemented with these metals at different concentrations. Finally, metal distribution in the tea plants under field conditions was also documented to assess the accumulation potential and critical limit of uptake by plants.

  20. Order-restricted inference for means with missing values.

    PubMed

    Wang, Heng; Zhong, Ping-Shou

    2017-09-01

    Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing the order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing order or a decreasing order based on jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution, whose weights depend on missing probabilities and nonparametric imputation. Simulation study shows that the proposed test performs well under various missing scenarios and is robust for normally and nonnormally distributed data. The proposed method is applied to an Alzheimer's disease neuroimaging initiative data set for finding a biomarker for the diagnosis of the Alzheimer's disease. © 2017, The International Biometric Society.
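
    The nonparametric imputation step can be sketched with a Nadaraya-Watson estimator under MAR (synthetic data; the bandwidth and regression model are assumptions, and the JEL ratio statistic itself is not reproduced):

    import numpy as np

    rng = np.random.default_rng(13)
    n = 500
    x = rng.uniform(0, 1, n)                       # fully observed covariate
    y = 2 * x + rng.normal(0, 0.3, n)              # response, partially missing
    miss = rng.random(n) < 0.3 * x                 # MAR: missingness depends on x
    y_obs = np.where(miss, np.nan, y)

    def nw_impute(x0, h=0.1):
        # Gaussian kernel weights over the observed cases only
        w = np.exp(-0.5 * ((x - x0) / h) ** 2) * ~miss
        return np.sum(w * np.nan_to_num(y_obs)) / w.sum()

    y_imp = np.where(miss, [nw_impute(t) for t in x], y_obs)
    print(np.nanmean(y_obs), y_imp.mean())         # before vs after imputation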

  1. Monitoring the Wall Mechanics During Stent Deployment in a Vessel

    PubMed Central

    Steinert, Brian D.; Zhao, Shijia; Gu, Linxia

    2012-01-01

    Clinical trials have reported different restenosis rates for various stent designs [1]. It is speculated that stent-induced strain concentrations on the arterial wall lead to tissue injury, which initiates restenosis [2-7]. This hypothesis needs further investigation, including better quantification of the non-uniform strain distribution on the artery following stent implantation. A non-contact surface strain measurement method for the stented artery is presented in this work. The ARAMIS stereo optical surface strain measurement system uses two optical high-speed cameras to capture the motion of each reference point and resolve three-dimensional strains over the deforming surface [8,9]. As a mesh stent is deployed into a latex vessel with a random contrasting pattern sprayed or drawn on its outer surface, the surface strain is recorded at every instant of the deformation. The calculated strain distributions can then be used to understand the local lesion response, validate computational models, and formulate hypotheses for further in vivo study. PMID:22588353

  2. Computational simulation of the creep-rupture process in filamentary composite materials

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.; Hackett, Robert M.

    1991-01-01

    A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
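
    In the same spirit, a stripped-down stochastic fiber-bundle sketch: fibers receive random flaw-derived strengths, surviving fibers share the load, and a stress-dependent breakdown rate accumulates failures until rupture, yielding a distribution of times to failure. All parameters and the equal-load-sharing rule are assumptions; the paper's finite-element model is far more detailed:

    import numpy as np

    rng = np.random.default_rng(10)

    def time_to_failure(n_fibers=100, load=1.0, rho=5.0, dt=0.01):
        strength = rng.weibull(5.0, n_fibers)      # random flaw-derived strengths
        alive = np.ones(n_fibers, dtype=bool)
        t = 0.0
        while alive.any():
            stress = load * n_fibers / alive.sum() # survivors share the total load
            p_fail = np.minimum(1.0, dt * (stress / strength) ** rho)
            alive &= rng.random(n_fibers) > p_fail # stress-dependent breakdown
            t += dt
        return t

    times = [time_to_failure() for _ in range(200)]
    print(np.mean(times), np.std(times))           # time-to-failure statistics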

  3. Dynamic stability of nano-fibers self-assembled from short amphiphilic A6D peptides.

    PubMed

    Nikoofard, Narges; Maghsoodi, Fahimeh

    2018-04-07

    Self-assembly of A 6 D amphiphilic peptides in explicit water is studied by using coarse-grained molecular dynamics simulations. It is observed that the self-assembly of randomly distributed A 6 D peptides leads to the formation of a network of nano-fibers. Two other simulations with cylindrical nano-fibers as the initial configuration show the dynamic stability of the self-assembled nano-fibers. As a striking feature, notable fluctuations occur along the axes of the nano-fibers. Depending on the number of peptides per unit length of the nano-fiber, flat-shaped bulges or spiral shapes along the nano-fiber axis are observed at the fluctuations. Analysis of the particle distribution around the nano-fiber indicates that the hydrophobic core and the hydrophilic shell of the nano-structure are preserved in both simulations. The size of the deformations and their correlation times are different in the two simulations. This study gives new insights into the dynamics of the self-assembled nano-structures of short amphiphilic peptides.

  4. Statics and Dynamics of Selfish Interactions in Distributed Service Systems

    PubMed Central

    Altarelli, Fabrizio; Braunstein, Alfredo; Dall’Asta, Luca

    2015-01-01

    We study a class of games which models the competition among agents to access some service provided by distributed service units and which exhibits congestion and frustration phenomena when service units have limited capacity. We propose a technique, based on the cavity method of statistical physics, to characterize the full spectrum of Nash equilibria of the game. The analysis reveals a large variety of equilibria, with very different statistical properties. Natural selfish dynamics, such as best-response, usually tend to large-utility equilibria, even though those of smaller utility are exponentially more numerous. Interestingly, the latter actually can be reached by selecting the initial conditions of the best-response dynamics close to the saturation limit of the service unit capacities. We also study a more realistic stochastic variant of the game by means of a simple and effective approximation of the average over the random parameters, showing that the properties of the average-case Nash equilibria are qualitatively similar to the deterministic ones. PMID:26177449

  5. Comparison between an instructor-led course and training using a voice advisory manikin in initial cardiopulmonary resuscitation skill acquisition.

    PubMed

    Min, Mun Ki; Yeom, Seok Ran; Ryu, Ji Ho; Kim, Yong In; Park, Maeng Real; Han, Sang Kyoon; Lee, Seong Hwa; Park, Sung Wook; Park, Soon Chang

    2016-09-01

    We compared training using a voice advisory manikin (VAM) with an instructor-led (IL) course in terms of acquisition of initial cardiopulmonary resuscitation (CPR) skills, as defined by the 2010 resuscitation guidelines. This study was a randomized, controlled, blinded, parallel-group trial. We recruited 82 first-year emergency medical technician students and distributed them randomly into two groups: the IL group (n=41) and the VAM group (n=37). In the IL group, participants were trained in "single-rescuer, adult CPR" according to the American Heart Association's Basic Life Support course for healthcare providers. In the VAM group, all subjects received a 20-minute lesson about CPR. After the lesson, each student trained individually with the VAM for 1 hour, receiving real-time feedback. After the training, all subjects were evaluated as they performed basic CPR (30 compressions, 2 ventilations) for 4 minutes. The proportion of participants with a mean compression depth ≥50 mm was 34.1% in the IL group and 27.0% in the VAM group; the proportion with a mean compression depth ≥40 mm was significantly higher than that with ≥50 mm in both groups (IL group, 82.9%; VAM group, 86.5%). However, no significant differences were detected between the groups in this regard. The proportion of ventilations of the appropriate volume was relatively low in both groups (IL group, 26.4%; VAM group, 12.5%; P=0.396). Both methods, IL training using a practice-while-watching video and VAM training, facilitated initial CPR skill acquisition, especially in terms of correct chest compression.

  6. Effects of ignition location models on the burn patterns of simulated wildfires

    USGS Publications Warehouse

    Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.

    2011-01-01

    Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.

  7. Models for the hotspot distribution

    NASA Technical Reports Server (NTRS)

    Jurdy, Donna M.; Stefanick, Michael

    1990-01-01

    Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.

  8. A network model of correlated growth of tissue stiffening in pulmonary fibrosis

    NASA Astrophysics Data System (ADS)

    Oliveira, Cláudio L. N.; Bates, Jason H. T.; Suki, Béla

    2014-06-01

    During the progression of pulmonary fibrosis, initially isolated regions of high stiffness form and grow in the lung tissue due to collagen deposition by fibroblast cells. We have previously shown that ongoing collagen deposition may not lead to significant increases in the bulk modulus of the lung until these local remodeled regions have become sufficiently numerous and extensive to percolate in a continuous path across the entire tissue (Bates et al 2007 Am. J. Respir. Crit. Care Med. 176 617). This model, however, did not include the possibility of spatially correlated deposition of collagen. In the present study, we investigate whether spatial correlations influence the bulk modulus in a two-dimensional elastic network model of lung tissue. Random collagen deposition at a single site is modeled by increasing the elastic constant of the spring at that site by a factor of 100. By contrast, correlated collagen deposition is represented by stiffening the springs encountered along a random walk starting from some initial spring, the rationale being that excess collagen deposition is more likely in the vicinity of an already stiff region. A combination of random and correlated deposition is modeled by performing random walks of length N from randomly selected initial sites, the balance between the two processes being determined by N. We found that the dependence of the bulk modulus, B(N,c), on both N and the fraction of stiff springs, c, can be described by a strikingly simple set of empirical equations. For c < 0.3, B(N,c) exhibits exponential growth from its initial value B0 according to B(N,c) ≈ B0 exp(2c)[1 + c^β ln(N^(a_I))], where β = 0.994 ± 0.024 and a_I = 0.54 ± 0.026. For intermediate concentrations of stiffening, 0.3 ≤ c ≤ 0.8, another exponential rule describes the bulk modulus as B(N,c) = 4 B0 exp[a_II (c − c_c)], where a_II and c_c are parameters that depend on N. For c > 0.8, B(N,c) is linear in c and independent of N, such that B(N,c) = 100 B0 − 100 a_III (1 − c) B0, where a_III = 2.857. For small concentrations, the physiologically most relevant regime, the forces in the network springs are distributed according to a power law. When c = 0.3, the exponent of this power law increases from −4.5 when N = 1 and saturates to about −2 as N increases above 40. These results suggest that the spatial correlation of collagen deposition in the fibrotic lung has a strong effect on the rate of lung function decline and on the mechanical environment in which the cells responsible for remodeling find themselves.

  9. Preparing Beginning Reading Teachers: An Experimental Comparison of Initial Early Literacy Field Experiences

    ERIC Educational Resources Information Center

    Al Otaiba, Stephanie; Lake, Vickie E.; Greulich, Luana; Folsom, Jessica S.; Guidry, Lisa

    2012-01-01

    This randomized-control trial examined the learning of preservice teachers taking an initial Early Literacy course in an early childhood education program and of the kindergarten or first grade students they tutored in their field experience. Preservice teachers were randomly assigned to one of two tutoring programs: Book Buddies and Tutor…

  10. Fidelity decay of the two-level bosonic embedded ensembles of random matrices

    NASA Astrophysics Data System (ADS)

    Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.

    2010-12-01

    We study the fidelity decay of the k-body embedded ensembles of random matrices for bosons distributed over two single-particle states. Fidelity is defined in terms of a reference Hamiltonian, which is a purely diagonal matrix consisting of a fixed one-body term and includes the diagonal of the perturbing k-body embedded ensemble matrix, and the perturbed Hamiltonian which includes the residual off-diagonal elements of the k-body interaction. This choice mimics the typical mean-field basis used in many calculations. We study separately the cases k = 2 and 3. We compute the ensemble-averaged fidelity decay as well as the fidelity of typical members with respect to an initial random state. Average fidelity displays a revival at the Heisenberg time, t = tH = 1, and a freeze in the fidelity decay, during which periodic revivals of period tH are observed. We obtain the relevant scaling properties with respect to the number of bosons and the strength of the perturbation. For certain members of the ensemble, we find that the period of the revivals during the freeze of fidelity occurs at fractional times of tH. These fractional periodic revivals are related to the dominance of specific k-body terms in the perturbation.

  11. Drop Spreading with Random Viscosity

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Jensen, Oliver

    2016-11-01

    Airway mucus acts as a barrier to protect the lung. However, as a biological material, its physical properties are known imperfectly and can be spatially heterogeneous. In this study we assess the impact of these uncertainties on the rate of spreading of a drop (representing an inhaled aerosol) over a mucus film. We model the film as Newtonian, having a viscosity that depends linearly on the concentration of a passive solute (a crude proxy for mucin proteins). Given an initial random solute (and hence viscosity) distribution, described as a Gaussian random field with a given correlation structure, we seek to quantify the uncertainties in outcomes as the drop spreads. Using lubrication theory, we describe the spreading of the drop in terms of a system of coupled nonlinear PDEs governing the evolution of film height and the vertically-averaged solute concentration. We perform Monte Carlo simulations to predict the variability in the drop centre location and width (1D) or area (2D). We show how simulation results are well described (at much lower computational cost) by a low-order model using a weak disorder expansion. Our results show, for example, how variability in the drop location is a non-monotonic function of the solute correlation length. Supported by the Engineering and Physical Sciences Research Council.
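
    For concreteness, here is a minimal sketch (ours, with illustrative parameter names) of generating a 1D Gaussian random field with a prescribed correlation length by spectral synthesis, of the kind used here to initialize the solute, and hence viscosity, profile.

```python
# Sketch: 1D Gaussian random field with squared-exponential correlation,
# generated spectrally, used as an initial solute/viscosity profile.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, dx, corr_len, sigma = 512, 0.01, 0.1, 0.2

k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
# Spectral amplitudes corresponding to a Gaussian covariance.
spectrum = np.sqrt(np.exp(-0.5 * (k * corr_len) ** 2))
noise = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
field = np.fft.irfft(spectrum * noise, n=n)
field *= sigma / field.std()        # normalize to the target std

viscosity = 1.0 * (1.0 + field)     # linear viscosity-solute law
print(viscosity[:5])
```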

  12. Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2005-11-01

    We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.

  13. Effect of packing method on the randomness of disc packings

    NASA Astrophysics Data System (ADS)

    Zhang, Z. P.; Yu, A. B.; Oakeshott, R. B. S.

    1996-06-01

    The randomness of disc packings, generated by random sequential adsorption (RSA), random packing under gravity (RPG) and Mason packing (MP), which gives a packing density close to that of the RSA packing, has been analysed based on the Delaunay tessellation and is evaluated at two levels: the randomness at the individual-subunit level, which relates to the construction of a triangle from a given edge-length distribution, and the randomness at the network level, which relates to the connection between triangles from a given triangle frequency distribution. The Delaunay tessellation itself is also analysed and its almost perfect randomness at the two levels is demonstrated, which verifies the proposed approach and provides a random reference system for the present analysis. It is found that (i) the construction of a triangle subunit is not random for the RSA, MP and RPG packings, with the degree of randomness decreasing from the RSA to the MP and then to the RPG packing; and (ii) the connection of triangular subunits in the network is almost perfectly random for the RSA packing, acceptable for the MP packing and not good for the RPG packing. The packing method is therefore an important factor governing the randomness of disc packings.

  14. Nature of alpha and beta particles in glycogen using molecular size distributions.

    PubMed

    Sullivan, Mitchell A; Vilaplana, Francisco; Cave, Richard A; Stapleton, David; Gray-Weale, Angus A; Gilbert, Robert G

    2010-04-12

    Glycogen is a randomly hyperbranched glucose polymer. Complex branched polymers have two structural levels: individual branches and the way these branches are linked. Liver glycogen has a third level: supramolecular clusters of beta particles which form larger clusters of alpha particles. Size distributions of native glycogen were characterized using size exclusion chromatography (SEC) to find the number and weight distributions and the size dependences of the number- and weight-average masses. These were fitted to two distinct randomly joined reference structures, constructed by random attachment of individual branches and as random aggregates of beta particles. The z-average size of the alpha particles in dimethylsulfoxide does not change significantly with high concentrations of LiBr, a solvent system that would disrupt hydrogen bonding. These data reveal that the beta particles are covalently bonded to form alpha particles through a hitherto unsuspected enzyme process, operative in the liver on particles above a certain size range.

  15. Noise-Induced Synchronization among Sub-RF CMOS Analog Oscillators for Skew-Free Clock Distribution

    NASA Astrophysics Data System (ADS)

    Utagawa, Akira; Asai, Tetsuya; Hirose, Tetsuya; Amemiya, Yoshihito

    We present on-chip oscillator arrays synchronized by random noise, aiming at skew-free clock distribution on synchronous digital systems. Nakao et al. recently reported that independent neural oscillators can be synchronized by applying temporal random impulses to the oscillators [1], [2]. We regard neural oscillators as independent clock sources on LSIs; i.e., clock sources are distributed on LSIs, and they are forced to synchronize through the use of random noise. We designed neuron-based clock generators operating in the sub-RF region (<1 GHz) by modifying the original neuron model to a new model that is suitable for CMOS implementation with 0.25-μm CMOS parameters. Through circuit simulations, we demonstrate that i) the clock generators are indeed synchronized by pseudo-random noise and ii) the clock generators exhibit phase-locked oscillations even when they have small device mismatches.

  16. Generation of pseudo-random numbers

    NASA Technical Reports Server (NTRS)

    Howell, L. W.; Rheinfurth, M. H.

    1982-01-01

    Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
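
    Two textbook examples of the kind of generators such a report covers (our illustrations, not the report's specific algorithms): inverse-transform sampling for the exponential distribution and the Box-Muller transform for the normal distribution.

```python
# Sketch: two classic methods for turning uniform variates into draws
# from common engineering distributions.
import random
import math

def exponential(lam):
    # If U ~ Uniform(0,1], then -ln(U)/lam ~ Exponential(lam).
    u = 1.0 - random.random()  # shift to (0,1] to avoid log(0)
    return -math.log(u) / lam

def normal_pair():
    # Box-Muller: two independent standard normals from two uniforms.
    u1 = 1.0 - random.random()
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

samples = [exponential(2.0) for _ in range(10_000)]
print(sum(samples) / len(samples))  # should be near 1/lambda = 0.5
```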

  17. Emergence of an optimal search strategy from a simple random walk

    PubMed Central

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-01-01

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445

  18. Emergence of an optimal search strategy from a simple random walk.

    PubMed

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
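
    The authors' exact adaptation rule is not reproduced here, but a hedged sketch of the general idea, a fixed-step walk whose heading persists based on recent directional experience, contrasted with the simple random walk, might look as follows; the persistence probability is our assumption, not the paper's rule.

```python
# Sketch: fixed-step 2D walk with heading persistence vs. a simple
# random walk. Larger mean displacement signals faster spreading.
import random
import math

def walk(steps, persist):
    """With probability `persist` keep the previous heading,
    otherwise draw a new heading uniformly. Step length is 1."""
    x = y = 0.0
    heading = random.uniform(0, 2 * math.pi)
    for _ in range(steps):
        if random.random() > persist:
            heading = random.uniform(0, 2 * math.pi)
        x += math.cos(heading)
        y += math.sin(heading)
    return math.hypot(x, y)

# persist=0 reduces to the simple random walk the paper starts from.
print("simple    :", sum(walk(1000, 0.0) for _ in range(200)) / 200)
print("persistent:", sum(walk(1000, 0.9) for _ in range(200)) / 200)
```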

  19. Multi-peak structure of generation spectrum of random distributed feedback fiber Raman lasers.

    PubMed

    Vatnik, I D; Zlobina, E A; Kablukov, S I; Babin, S A

    2017-02-06

    We study spectral features of the generation of a random distributed feedback fiber Raman laser arising from the two-peak shape of the Raman gain spectral profile realized in germanosilicate fibers. We demonstrate that the number of peaks can be calculated using a power-balance model that considers the different subcomponents within each Stokes component.

  20. Robustness of optimal random searches in fragmented environments

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Santos, M. C.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-05-01

    The random search problem is a challenging and interdisciplinary topic of research in statistical physics. Realistic searches usually take place in nonuniform heterogeneous distributions of targets, e.g., patchy environments and fragmented habitats in ecological systems. Here we present a comprehensive numerical study of search efficiency in arbitrarily fragmented landscapes with unlimited visits to targets that can only be found within patches. We assume a random walker selecting uniformly distributed turning angles and step lengths from an inverse power-law tailed distribution with exponent μ . Our main finding is that for a large class of fragmented environments the optimal strategy corresponds approximately to the same value μopt≈2 . Moreover, this exponent is indistinguishable from the well-known exact optimal value μopt=2 for the low-density limit of homogeneously distributed revisitable targets. Surprisingly, the best search strategies do not depend (or depend only weakly) on the specific details of the fragmentation. Finally, we discuss the mechanisms behind this observed robustness and comment on the relevance of our results to both the random search theory in general, as well as specifically to the foraging problem in the biological context.
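
    A minimal sketch of the step-length sampler implied above: inverse-transform draws from a power-law-tailed distribution p(l) ∝ l^(−μ) for l ≥ l0, with μ and l0 as illustrative parameters.

```python
# Sketch: inverse-transform sampling of power-law-tailed step lengths,
# p(l) ~ l^(-mu) for l >= l0, as used in Levy-type search models.
import numpy as np

def levy_steps(n, mu, l0=1.0, rng=None):
    rng = rng or np.random.default_rng(3)
    u = 1.0 - rng.random(n)              # uniform on (0, 1]
    return l0 * u ** (-1.0 / (mu - 1.0))  # valid for mu > 1

for mu in (1.5, 2.0, 3.0):
    steps = levy_steps(100_000, mu)
    print(f"mu={mu}: median step {np.median(steps):.2f}")
```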

  1. Optimizing placements of ground-based snow sensors for areal snow cover estimation using a machine-learning algorithm and melt-season snow-LiDAR data

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2016-12-01

    We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.
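
    A hedged sketch of the first stage (synthetic data and variable names are ours; the study used LiDAR-derived rasters): cluster the terrain descriptors with a Gaussian mixture and place one sensor at the grid cell nearest each component mean.

```python
# Sketch: Gaussian-mixture-based sensor placement over terrain features.
# One row per grid cell: [elevation (m), slope (deg), aspect (deg)].
# In practice one would standardize features before clustering.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
features = np.column_stack([
    rng.uniform(1500, 3500, 5000),
    rng.uniform(0, 45, 5000),
    rng.uniform(0, 360, 5000),
])

gmm = GaussianMixture(n_components=10, random_state=0).fit(features)

# Candidate sensor sites: grid cells closest to each component mean.
d = np.linalg.norm(features[:, None, :] - gmm.means_[None, :, :], axis=2)
sensor_cells = d.argmin(axis=0)
print("sensor cell indices:", sensor_cells)
```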

  2. Discussion about initial runoff and volume capture ratio of annual rainfall.

    PubMed

    Zhang, Kun; Che, Wu; Zhang, Wei; Zhao, Yang

    2016-10-01

    In recent years, runoff pollution from urban areas has become a major concern all over the world, but there is widespread confusion about how much stormwater should be captured for the purpose of runoff pollution control. Furthermore, the construction cost and pollution control efficiency are closely linked with the size of stormwater facilities, which is in turn related to the first flush (FF) phenomenon and the volume capture ratio of annual rainfall (VCRa). Against this background, an analysis of the random and changeable characteristics of the occurrence of FF was carried out first; the FF was shown to vary with catchment characteristics and pollutant types. Secondly, the distribution of design rainfall depth toward 85% VCRa in China and its causes were analyzed. Thirdly, the relationship between initial runoff and VCRa was studied at both conceptual and numerical levels, and the change rule of VCRa along with design rainfall depth in different regions was studied. The limitation of initial runoff is illustrated from the perspective of runoff characteristics of single rainfall events in the first part, and from the perspective of regional differences in the two subsequent parts.

  3. Cascaded Raman lasing in a PM phosphosilicate fiber with random distributed feedback

    NASA Astrophysics Data System (ADS)

    Lobach, Ivan A.; Kablukov, Sergey I.; Babin, Sergey A.

    2018-02-01

    We report on the first demonstration of a linearly polarized cascaded Raman fiber laser based on a simple half-open cavity with a broadband composite reflector and random distributed feedback in a polarization-maintaining phosphosilicate fiber operating beyond the zero-dispersion wavelength (~1400 nm). With increasing pump power from a Yb-doped fiber laser at 1080 nm, the random laser subsequently generates 8 W at 1262 nm and 9 W at 1515 nm with a polarization extinction ratio of 27 dB. The generation linewidths amount to about 1 nm and 3 nm, respectively, being almost independent of power, in correspondence with the theory of cascaded random lasing.

  4. Effects of vibration and shock on the performance of gas-bearing space-power Brayton cycle turbomachinery. Part 3: Sinusoidal and random vibration data reduction and evaluation, and random vibration probability analysis

    NASA Technical Reports Server (NTRS)

    Tessarzik, J. M.; Chiang, T.; Badgley, R. H.

    1973-01-01

    The random vibration response of a gas-bearing rotor support system has been experimentally and analytically investigated in the amplitude and frequency domains. The NASA Brayton Rotating Unit (BRU), a 36,000 rpm, 10 kWe turbogenerator, had previously been subjected in the laboratory to external random vibrations, and the response data recorded on magnetic tape. This data has now been experimentally analyzed for amplitude distribution and frequency content. The results of the power spectral density analysis indicate strong vibration responses for the major rotor-bearing system components at frequencies which correspond closely to their resonant frequencies obtained under periodic vibration testing. The results of the amplitude analysis indicate an increasing shift towards non-Gaussian distributions as the input level of external vibrations is raised. Analysis of the axial random vibration response of the BRU was performed using a linear three-mass model. Power spectral densities and the root-mean-square value of thrust-bearing surface contact were calculated for specified input random excitation.

  5. Generating constrained randomized sequences: item frequency matters.

    PubMed

    French, Robert M; Perruchet, Pierre

    2009-11-01

    All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), which are designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items. The algorithm mentioned above provides no control over this. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties, and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
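
    In the same spirit as the transition-table technique described above (but not the article's Excel implementation), a small generator might draw each successor from the remaining transition counts; note that a greedy draw can stall before the table is exhausted, in which case one would simply restart.

```python
# Sketch: sequence generation from a transition-count table with no
# immediate repeats. The table below is an arbitrary example.
import random

# transitions[x][y] = number of x->y transitions still to be used.
transitions = {
    "A": {"B": 10, "C": 10},
    "B": {"A": 10, "C": 10},
    "C": {"A": 10, "B": 10},
}

def generate(start):
    seq, current = [start], start
    while True:
        options = [(y, n) for y, n in transitions[current].items() if n > 0]
        if not options:
            break  # table exhausted (or a greedy dead end: restart)
        nxt = random.choices([y for y, _ in options],
                             weights=[n for _, n in options])[0]
        transitions[current][nxt] -= 1
        seq.append(nxt)
        current = nxt
    return seq

print("".join(generate("A")))
```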

  6. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared with those of three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
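
    A minimal sketch of the LULHS idea under our assumptions: stratified (Latin Hypercube) standard normals are given the target spatial covariance through its lower-triangular Cholesky factor, the LU-type step. The grid, covariance model, and correlation length are illustrative.

```python
# Sketch: Latin Hypercube stratified normals + Cholesky factor of the
# covariance = one correlated random-field realization.
import numpy as np
from scipy.stats import norm
from scipy.linalg import cholesky

rng = np.random.default_rng(5)
n = 100                                 # number of grid points
x = np.linspace(0.0, 10.0, n)

# Exponential covariance with correlation length 2.0 (illustrative).
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
Lfac = cholesky(C + 1e-10 * np.eye(n), lower=True)

# Latin Hypercube draw: one uniform per stratum, randomly assigned.
u = (rng.permutation(n) + rng.random(n)) / n
z = norm.ppf(u)                         # stratified standard normals

field = Lfac @ z                        # correlated realization
print(field[:5])
```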

  7. Radial Alignment of Elliptical Galaxies by the Tidal Force of a Cluster of Galaxies

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang-Nan; Rong, Yu; Tu, Hong

    2015-08-01

    Unlike the random radial orientation distribution of field elliptical galaxies, galaxies in a cluster of galaxies are expected to point preferentially toward the center of the cluster, as a result of the cluster's tidal force on its member galaxies. In this work an analytic model is formulated to simulate this effect. The deformation time scale of a galaxy in a cluster is usually much shorter than the time scale of change of the tidal force; the dynamical process of the tidal interaction within the galaxy can thus be ignored. An equilibrium shape of a galaxy is then assumed to be the surface of equipotential, which is the sum of the self-gravitational potential of the galaxy and the tidal potential of the cluster at this location. We use a Monte-Carlo method to calculate the radial orientation distribution of these galaxies, by assuming the NFW mass profile of the cluster and the initial ellipticity of field galaxies. The radial angles show a single-peaked distribution centered at zero. The Monte-Carlo simulations also show that a shift of the reference center from the real cluster center weakens the anisotropy of the radial angle distribution. Therefore, the expected radial alignment cannot be revealed if the distribution of spatial position angle is used instead of that of radial angle. The observed radial orientations of elliptical galaxies in cluster Abell 2744 are consistent with the simulated distribution.

  8. Radial Alignment of Elliptical Galaxies by the Tidal Force of a Cluster of Galaxies

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang-Nan; Rong, Yu; Tu, Hong

    2015-08-01

    Unlike the random radial orientation distribution of field elliptical galaxies, galaxies in a cluster of galaxies are expected to point preferentially toward the center of the cluster, as a result of the cluster's tidal force on its member galaxies. In this work an analytic model is formulated to simulate this effect. The deformation time scale of a galaxy in a cluster is usually much shorter than the time scale of change of the tidal force; the dynamical process of the tidal interaction within the galaxy can thus be ignored. An equilibrium shape of a galaxy is then assumed to be the surface of equipotential, which is the sum of the self-gravitational potential of the galaxy and the tidal potential of the cluster at this location. We use a Monte-Carlo method to calculate the radial orientation distribution of these galaxies, by assuming the NFW mass profile of the cluster and the initial ellipticity of field galaxies. The radial angles show a single-peaked distribution centered at zero. The Monte-Carlo simulations also show that a shift of the reference center from the real cluster center weakens the anisotropy of the radial angle distribution. Therefore, the expected radial alignment cannot be revealed if the distribution of spatial position angle is used instead of that of radial angle. The observed radial orientations of elliptical galaxies in cluster Abell 2744 are consistent with the simulated distribution.

  9. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and the multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome the adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  10. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and the multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome the adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  11. Imbalance p values for baseline covariates in randomized controlled trials: a last resort for the use of p values? A pro and contra debate.

    PubMed

    Stang, Andreas; Baethge, Christopher

    2018-01-01

    Results of randomized controlled trials (RCTs) are usually accompanied by a table that compares covariates between the study groups at baseline. Sometimes, the investigators report p values for imbalanced covariates. The aim of this debate is to illustrate the pros and cons of the use of these p values in RCTs. Low p values can be a sign of biased or fraudulent randomization and can be used as a warning sign. They can be considered a screening tool with low positive-predictive value. Low p values should prompt us to ask for the reasons and for potential consequences, especially in combination with hints of methodological problems. A fair randomization produces the expectation that the distribution of p values follows a flat distribution; it does not produce an expectation related to any single p value. The distribution of p values in RCTs can be influenced by the correlation among covariates and by differential misclassification or differential mismeasurement of baseline covariates. Given only a small number of reported p values in the reports of RCTs, judging whether the realized p value distribution is indeed flat becomes difficult. If p values ≤0.005 or ≥0.995 were used as a sign of alarm and five p values were reported per RCT, the false-positive rate would be 5.0% when randomization was done correctly. Use of a low p value as a warning sign that randomization is potentially biased can be considered a vague heuristic. The authors of this debate are more or less enthusiastic about this heuristic and differ in the consequences they propose.
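
    The 5.0% figure follows from the uniformity of baseline p values under fair randomization: the probability that none of five independent p values falls in the 1% alarm region is 0.99^5 ≈ 0.951. A quick simulation check (ours):

```python
# Sketch: under fair randomization, baseline p values are uniform, so
# with five p values per trial the chance that at least one falls at or
# below 0.005 or at or above 0.995 is 1 - 0.99**5, about 5%.
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_pvals = 100_000, 5

p = rng.random((n_trials, n_pvals))            # uniform under H0
flagged = ((p <= 0.005) | (p >= 0.995)).any(axis=1)
print(f"false-positive rate: {flagged.mean():.3f}")  # ~0.049
```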

  12. Distribution law of the Dirac eigenmodes in QCD

    NASA Astrophysics Data System (ADS)

    Catillo, Marco; Glozman, Leonid Ya.

    2018-04-01

    The near-zero modes of the Dirac operator are connected to spontaneous breaking of chiral symmetry in QCD (SBCS) via the Banks-Casher relation. At the same time, the distribution of the near-zero modes is well described by Random Matrix Theory (RMT) with the Gaussian Unitary Ensemble (GUE). It has thus become standard lore that randomness, as observed through distributions of the near-zero modes of the Dirac operator, is a consequence of SBCS. The higher-lying modes of the Dirac operator are not affected by SBCS and are sensitive to confinement physics and the related SU(2)CS and SU(2NF) symmetries. We study the distribution of the near-zero and higher-lying eigenmodes of the overlap Dirac operator within NF = 2 dynamical simulations. We find that the distributions of both the near-zero and higher-lying modes are perfectly described by the GUE of RMT. This means that randomness, while consistent with SBCS, is not a consequence of SBCS and is linked to the confining chromo-electric field.
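
    As a self-contained illustration of the GUE comparison (not the lattice analysis itself), one can sample a GUE matrix and compare nearest-neighbor eigenvalue spacings with the GUE Wigner surmise p(s) = (32/π²)s² exp(−4s²/π); proper spectral unfolding is skipped here, so the agreement is only approximate.

```python
# Sketch: nearest-neighbor spacing statistics of a GUE matrix vs. the
# GUE Wigner surmise. Unfolding is skipped for brevity, so only the
# central (near-flat-density) part of the spectrum is used.
import numpy as np

rng = np.random.default_rng(7)
N = 1000
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2                  # Hermitian GUE member

ev = np.linalg.eigvalsh(H)
s = np.diff(ev[N // 4 : 3 * N // 4])      # central half of the spectrum
s = s / s.mean()                           # normalize mean spacing to 1

hist, edges = np.histogram(s, bins=30, density=True)
mid = (edges[:-1] + edges[1:]) / 2
surmise = (32 / np.pi**2) * mid**2 * np.exp(-4 * mid**2 / np.pi)
print("max deviation from surmise:", np.max(np.abs(hist - surmise)))
```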

  13. Non-random distribution and co-localization of purine/pyrimidine-encoded information and transcriptional regulatory domains.

    PubMed

    Povinelli, C M

    1992-01-01

    In order to detect sequence-based information predictive of the location of eukaryotic transcriptional regulatory domains, the frequencies and distributions of the 36 possible purine/pyrimidine reverse-complement hexamer pairs were determined for test sets of real and random sequences. The distribution of one of the hexamer pairs (RRYYRR/YYRRYY, referred to as M1) was further examined in a larger set of sequences (>32 genes, 230 kb). Predominant clusters of M1 and the locations of eukaryotic transcriptional regulatory domains were found to be associated and non-randomly distributed along the DNA, consistent with a periodicity of approximately 1.2 kb. In the context of higher-ordered chromatin, this would align promoters, enhancers and the predominant clusters of M1 longitudinally along one face of a 30 nm fiber. Using only information about the distribution of the M1 motif, 50-70% of a sequence could be eliminated as being unlikely to contain transcriptional regulatory domains, with an 87% recovery of the regulatory domains present.

  14. Two approximations of the present value distribution of a disability annuity

    NASA Astrophysics Data System (ADS)

    Spreeuw, Jaap

    2006-02-01

    The distribution function of the present value of a cash flow can be approximated by means of a distribution function of a random variable, which is also the present value of a sequence of payments, but with a simpler structure. The corresponding random variable has the same expectation as the random variable corresponding to the original distribution function and is a stochastic upper bound of convex order. A sharper upper bound can be obtained if more information about the risk is available. In this paper, it will be shown that such an approach can be adopted for disability annuities (also known as income protection policies) in a three state model under Markov assumptions. Benefits are payable during any spell of disability whilst premiums are only due whenever the insured is healthy. The quality of the two approximations is investigated by comparing the distributions obtained with the one derived from the algorithm presented in the paper by Hesselager and Norberg [Insurance Math. Econom. 18 (1996) 35-42].

  15. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that, renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such a limit law is stationary, in the sense that, gluing together a certain number of copies of the random limit space according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on the positive reals, the stationary law is unique up to rescaling, and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.

  16. Reliability analysis of structures under periodic proof tests in service

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.

  17. Joint reconstruction of the initial pressure and speed of sound distributions from combined photoacoustic and ultrasound tomography measurements

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Anastasio, Mark A.

    2017-12-01

    The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.

  18. On the importance of an accurate representation of the initial state of the system in classical dynamics simulations

    NASA Astrophysics Data System (ADS)

    García-Vela, A.

    2000-05-01

    A definition of a quantum-type phase-space distribution is proposed in order to represent the initial state of the system in a classical dynamics simulation. The central idea is to define an initial quantum phase-space state of the system as the direct product of the coordinate and momentum representations of the quantum initial state. The phase-space distribution is then obtained as the square modulus of this phase-space state. The resulting phase-space distribution closely resembles the quantum nature of the system initial state. The initial conditions are sampled with the distribution, using a grid technique in phase space. With this type of sampling the distribution of initial conditions reproduces more faithfully the shape of the original phase-space distribution. The method is applied to generate initial conditions describing the three-dimensional state of the Ar-HCl cluster prepared by ultraviolet excitation. The photodissociation dynamics is simulated by classical trajectories, and the results are compared with those of a wave packet calculation. The classical and quantum descriptions are found in good agreement for those dynamical events less subject to quantum effects. The classical result fails to reproduce the quantum mechanical one for the more strongly quantum features of the dynamics. The properties and applicability of the phase-space distribution and the sampling technique proposed are discussed.

  19. Bayesian ionospheric multi-instrument 3D tomography

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Vierinen, Juha; Roininen, Lassi

    2017-04-01

    The tomographic reconstruction of ionospheric electron densities is an inverse problem that cannot be solved without relatively strong regularising additional information. The vertical electron density profile, especially, is determined predominantly by the regularisation. Despite its crucial role, the regularisation is often hidden in the algorithm as a numerical procedure without physical understanding. The Bayesian methodology provides an interpretative approach to the problem, as the regularisation can be given as a physically meaningful and quantifiable prior probability distribution. The prior distribution can be based on ionospheric physics and on other available ionospheric measurements and their statistics. Updating the prior with measurements results in the posterior distribution, which carries all the available information combined. From the posterior distribution, the most probable state of the ionosphere can then be solved together with the corresponding probability intervals. Altogether, the Bayesian methodology provides an understanding of how strong the given regularisation is, what information is gained from the measurements, and how reliable the final result is. In addition, the combination of different measurements and the temporal development can be taken into account in a very intuitive way. However, a direct implementation of the Bayesian approach requires the inversion of large covariance matrices, which is computationally infeasible. In the presented method, Gaussian Markov random fields are used to form sparse matrix approximations of the covariances. This approach makes the problem computationally feasible while retaining the probabilistic and physical interpretation. Here, the Bayesian method with Gaussian Markov random fields is applied to ionospheric 3D tomography over Northern Europe. Multi-instrument measurements are utilised from the TomoScand receiver network for Low Earth orbit beacon satellite signals, from GNSS receiver networks, and from EISCAT ionosondes and incoherent scatter radars. The performance is demonstrated in the three-dimensional spatial domain with the temporal development also taken into account.
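
    A hedged 1D sketch of the computational trick (our toy observation model, not the TomoScand pipeline): the smoothness prior enters as a sparse GMRF precision matrix, so the posterior mean is obtained with one sparse solve instead of dense covariance inversions.

```python
# Sketch: GMRF prior via a sparse precision matrix Q, toy linear
# observations, and a sparse solve for the posterior mean.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 500
# Second-difference precision: a random-walk-of-order-2 smoothness prior.
D = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
tau = 100.0
Q_prior = tau * (D.T @ D) + 1e-6 * sp.eye(n)

# Toy observations y = A x + noise at 50 random locations.
rng = np.random.default_rng(8)
idx = rng.choice(n, 50, replace=False)
A = sp.csr_matrix((np.ones(50), (np.arange(50), idx)), shape=(50, n))
y = np.sin(np.linspace(0, 3 * np.pi, n))[idx] + 0.05 * rng.normal(size=50)

sigma2 = 0.05**2
Q_post = Q_prior + (A.T @ A) / sigma2       # posterior precision (sparse)
mean_post = spsolve(Q_post.tocsc(), (A.T @ y) / sigma2)
print(mean_post[:5])
```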

  20. Mineral requirements for growth and maintenance of F1 Boer × Saanen male kids.

    PubMed

    Teixeira, I A M A; Härter, C J; Pereira Filho, J M; Sobrinho, A G da Silva; Resende, K T

    2015-05-01

    The objective of this study was to determine the net requirements of minerals for the growth and maintenance of intact male F1 Boer × Saanen goat kids in the initial phase of growth. The following 2 experiments were performed: Exp. 1 was performed to determine the net growth requirements for Ca, P, Mg, Na, and K by F1 Boer × Saanen goat kids from 5 to 25 kg of BW and Exp. 2 was performed to determine the maintenance requirements of F1 Boer × Saanen goats from 15 to 25 kg BW. In Exp. 1, 32 intact male goat kids were distributed in a completely randomized design and mineral body composition was fit to an allometric equation in the form of a nonlinear model. To determine the mineral requirements for maintenance in Exp. 2, 21 intact male goat kids were distributed in a randomized block design, where the goat kids were subjected to 3 levels of feed restriction (0, 30, and 60% feed restriction). At the onset of Exp. 2, 7 goat kids were harvested and used to estimate the initial body composition (15 kg BW). Initial body composition was used to calculate the retention of minerals. The maintenance requirements were estimated by regressions obtained from the retention of minerals in the empty body and the intake of the mineral. The concentration of Ca, P, Na, and K in the empty BW decreased by 11, 13, 26, and 23% with the increase in BW from 5 to 25 kg (P < 0.01). As a consequence, our results showed that net requirements of Ca, P, Mg, Na, and K for weight gain decreased by 27.5, 27.8, 4.25, 43.2, and 39.7%, respectively, with the increase in BW from 5 to 25 kg (P < 0.01). The net requirements (g/kg of ADG) decreased from 9.7 to 7.0 for Ca, 6.5 to 4.7 for P, 0.38 to 0.36 for Mg, 0.88 to 0.50 for Na, and 1.9 to 1.2 for K when BW increased from 5 to 25 kg. The daily net requirements for maintenance per kilogram of BW were 38 mg of Ca, 42 mg of P, 1.6 mg of Mg, 5.0 mg of Na, and 19 mg of K. These results for the nutritional requirements of minerals may help to formulate more balanced diets for F1 Boer × Saanen goat kids in the initial growth phase.

  1. Kinetic market models with single commodity having price fluctuations

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.; Chakrabarti, B. K.

    2006-12-01

    We study here numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in the money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and, for a market with agents having random saving propensities, the same power-law tail as that of the money distribution.
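
    A minimal sketch of the money-exchange dynamics underlying such models (commodity and price fluctuations omitted): pairwise, money-conserving trades in which each agent i retains a fixed random saving fraction λ_i per trade.

```python
# Sketch: kinetic exchange market with quenched random saving factors.
# A power-law tail in the money distribution is the expected signature.
import numpy as np

rng = np.random.default_rng(9)
n_agents, n_steps = 1000, 200_000
money = np.ones(n_agents)
lam = rng.random(n_agents)          # fixed random saving factor per agent

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    pot = (1 - lam[i]) * money[i] + (1 - lam[j]) * money[j]
    eps = rng.random()               # random split of the traded pot
    money[i] = lam[i] * money[i] + eps * pot
    money[j] = lam[j] * money[j] + (1 - eps) * pot

print("richest agents:", np.sort(money)[-5:])
```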

  2. Analysis of random drop for gateway congestion control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hashem, Emam Salaheddin

    1989-01-01

    Lately, the growing demand on the Internet has prompted the need for more effective congestion control policies. Currently, no gateway policy is used to relieve and signal congestion, which leads to unfair service to individual users and a degradation of overall network performance. Network simulation was used to illustrate the character of Internet congestion and its causes. A newly proposed gateway congestion control policy, called Random Drop, was considered as a promising solution to this pressing problem. Random Drop relieves resource congestion upon buffer overflow by choosing a random packet from the service queue to be dropped. The random choice should result in a drop distribution proportional to the bandwidth distribution among all contending TCP connections, thus providing the necessary fairness. Nonetheless, the simulation experiments demonstrate several shortcomings of this policy. Because Random Drop is a congestion control policy, which is not applied until congestion has already occurred, it usually results in a high drop rate that hurts too many connections, including well-behaved ones. Even though the number of packets dropped differs from one connection to another depending on the buffer utilization upon overflow, the TCP recovery overhead is high enough to neutralize these differences, causing unfair congestion penalties. Besides, the drop distribution itself is an inaccurate representation of the average bandwidth distribution, missing much important information about the bandwidth utilization between buffer overflow events. A modification of Random Drop to do congestion avoidance by applying the policy early was also proposed. Early Random Drop has the advantage of avoiding the high drop rate of buffer overflow. The early application of the policy removes the pressure of congestion relief and allows more accurate signaling of congestion. To be used effectively, algorithms for the dynamic adjustment of the parameters of Early Random Drop to suit the current network load must still be developed.

  3. Structures and mechanical behaviors of Zr55Cu35Al10 bulk amorphous alloys at ambient and cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Fan, Cang; Liaw, P. K.; Haas, V.; Wall, J. J.; Choo, H.; Inoue, A.; Liu, C. T.

    2006-07-01

    Based on a systematic study of pair distribution functions, carried out at cryogenic and ambient temperatures on as-cast and crystallized ternary Zr-based bulk amorphous alloys (BAAs), we found that the atoms in BAAs are inhomogeneously distributed at the local atomic level. They exist as different clusters with significantly shorter bond lengths than their crystallized counterpart structures (intermetallic compounds), and these structures exist stably in the amorphous state. This results in additional free volume, which is about 7% larger than that measured by the Archimedes method. The compressive strength measured at ~77 K was found to be about 16% larger than that measured at 298 K. In this study, an amorphous structural model is proposed, in which strongly bonded clusters acting as units are randomly distributed and strongly correlated to one another, as the free volume forms between clusters. Simulations with reverse Monte Carlo were performed by combining icosahedral and cubic structures as the initial structures for the BAA. The simulations show results consistent with our model. An attempt has been made to connect the relationship between amorphous structures and their mechanical properties.

  4. Classification of kidney and liver tissue using ultrasound backscatter data

    NASA Astrophysics Data System (ADS)

    Aalamifar, Fereshteh; Rivaz, Hassan; Cerrolaza, Juan J.; Jago, James; Safdar, Nabile; Boctor, Emad M.; Linguraru, Marius G.

    2015-03-01

    Ultrasound (US) tissue characterization provides valuable information for the initialization of automatic segmentation algorithms, and can further provide complementary information for the diagnosis of pathologies. US tissue characterization is challenging due to the presence of various types of image artifacts and the dependence on the sonographer's skills. One way of overcoming this challenge is by characterizing images based on the distribution of the backscatter data derived from the interaction between US waves and tissue. The goal of this work is to classify liver versus kidney tissue in 3D volumetric US data using the distribution of backscatter US data recovered from the end-user-displayed B-mode images available in clinical systems. To this end, we first propose the computation of a large set of features based on the homodyned-K distribution of the speckle as well as the correlation coefficients between small patches in 3D images. We then utilize the random forests framework to select the most important features for classification. Experiments on in-vivo 3D US data from nine pediatric patients with hydronephrosis showed an average accuracy of 94% for the classification of liver and kidney tissues, showing the good potential of this work to assist in the classification and segmentation of abdominal soft tissue.

  5. Online distribution channel increases article usage on Mendeley: a randomized controlled trial.

    PubMed

    Kudlow, Paul; Cockerill, Matthew; Toccalino, Danielle; Dziadyk, Devin Bissky; Rutledge, Alan; Shachak, Aviv; McIntyre, Roger S; Ravindran, Arun; Eysenbach, Gunther

    2017-01-01

    Prior research shows that article reader counts (i.e. saves) on the online reference manager Mendeley correlate with future citations. There are currently no evidence-based distribution strategies that have been shown to increase article saves on Mendeley. We conducted a 4-week randomized controlled trial to examine how promotion of article links in a novel online cross-publisher distribution channel (TrendMD) affects article saves on Mendeley. Four hundred articles published in the Journal of Medical Internet Research were randomized to either the TrendMD arm (n = 200) or the control arm (n = 200) of the study. Our primary outcome compares the 4-week mean Mendeley saves of articles randomized to TrendMD versus control. Articles randomized to TrendMD showed a 77% increase in article saves on Mendeley relative to control. The difference in mean Mendeley saves for TrendMD articles versus control was 2.7, 95% CI (2.63, 2.77), and statistically significant (p < 0.01). There was a positive correlation between pageviews driven by TrendMD and article saves on Mendeley (Spearman's rho r = 0.60). This is the first randomized controlled trial to show how an online cross-publisher distribution channel (TrendMD) enhances article saves on Mendeley. While replication and further study are needed, these data suggest that cross-publisher article recommendations via TrendMD may enhance citations of scholarly articles.

  6. Universal energy distribution for interfaces in a random-field environment

    NASA Astrophysics Data System (ADS)

    Fedorenko, Andrei A.; Stepanow, Semjon

    2003-11-01

    We study the energy distribution function ρ(E) for interfaces in a random-field environment at zero temperature by summing the leading terms in the perturbation expansion of ρ(E) in powers of the disorder strength, and by taking into account the nonperturbative effects of the disorder using the functional renormalization group. We have found that the average and the variance of the energy for a one-dimensional interface of length L behave as $\langle E \rangle_R \propto L \ln L$ and $\Delta E_R \propto L$, while the distribution function of the energy tends for large L to the Gumbel distribution of extreme value statistics.

  7. Evaluation of a Randomized Intervention to Delay Sexual Initiation among Fifth-Graders Followed through the Sixth Grade

    ERIC Educational Resources Information Center

    Koo, Helen P.; Rose, Allison; El-Khorazaty, M. Nabil; Yao, Qing; Jenkins, Renee R.; Anderson, Karen M.; Davis, Maurice; Walker, Leslie R.

    2011-01-01

    US adolescents initiate sex at increasingly younger ages, yet few pregnancy prevention interventions for children as young as 10-12 years old have been evaluated. Sixteen Washington, DC schools were randomly assigned to intervention versus control conditions. Beginning in 2001/02 with fifth-grade students and continuing during the sixth grade,…

  8. Weighted Scaling in Non-growth Random Networks

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li

    2012-09-01

    We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices, define the weight of a multiple-edge as the total weight of all single-edges within it, and define the strength of a vertex as the sum of the weights of the multiple-edges attached to it. The network evolves according to a vertex-strength preferential selection mechanism. During the evolution process, the network always keeps its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network forms steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also develops an exponential edge-weight distribution; namely, the coexistence of a scale-free strength distribution and an exponential edge-weight distribution emerges.

  9. General Exact Solution to the Problem of the Probability Density for Sums of Random Variables

    NASA Astrophysics Data System (ADS)

    Tribelsky, Michael I.

    2002-07-01

    The exact explicit expression for the probability density pN(x) for a sum of N random, arbitrarily correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to the application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of pN(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). A quantitative description of all three parts, as well as of the entire profile, is obtained. A number of particular examples are considered in detail.

  10. General exact solution to the problem of the probability density for sums of random variables.

    PubMed

    Tribelsky, Michael I

    2002-08-12

    The exact explicit expression for the probability density p(N)(x) for a sum of N random, arbitrarily correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to the application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of p(N)(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). A quantitative description of all three parts, as well as of the entire profile, is obtained. A number of particular examples are considered in detail.

  11. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
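
    The postprocessing chain described here (periodic sampling, self-differencing, keeping m least-significant bits per sample) can be sketched in a few lines. The logistic-map source below is a toy stand-in for the chaotic laser intensity, and the delay and bit-depth values are illustrative.

```python
import numpy as np

def random_bits(samples, delay=3, m=5):
    """Self-difference 8-bit samples and keep m least-significant bits each."""
    q = np.asarray(samples, dtype=np.int64)
    diff = (q[delay:] - q[:-delay]) & 0xFF        # self-differencing, mod 256
    # unpack the m lowest bits of every differenced sample
    return np.array([(d >> k) & 1 for d in diff for k in range(m)],
                    dtype=np.uint8)

# toy chaotic source: logistic map quantized to 8 bits
x, xs = 0.4, []
for _ in range(10000):
    x = 3.99 * x * (1 - x)
    xs.append(int(x * 255))

bits = random_bits(xs)
print("bits:", bits.size, " bias:", bits.mean())   # bias should be near 0.5
```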

  12. Attractive interaction between Mn atoms on the GaAs(110) surface observed by scanning tunneling microscopy.

    PubMed

    Taninaka, Atsushi; Yoshida, Shoji; Kanazawa, Ken; Hayaki, Eiko; Takeuchi, Osamu; Shigekawa, Hidemi

    2016-06-16

    Scanning tunneling microscopy/spectroscopy (STM/STS) was carried out to investigate the structures of Mn atoms deposited on a GaAs(110) surface at room temperature and to directly observe the characteristics of the interactions between Mn atoms in GaAs. Mn atoms were paired with a probability higher than that expected for a random distribution, indicating an attractive interaction between them. Indeed, re-pairing of unpaired Mn atoms was observed during STS measurement. The pair initially had a new structure, which was transformed during STS measurement into one of those formed by atom manipulation at 4 K. Mn atoms in pairs and trimers were aligned in the <110> direction, which is theoretically predicted to produce a high Curie temperature.

  13. Comparison of algorithms for the detection of cancer-drivers at sub-gene resolution

    PubMed Central

    Porta-Pardo, Eduard; Kamburov, Atanas; Tamborero, David; Pons, Tirso; Grases, Daniela; Valencia, Alfonso; Lopez-Bigas, Nuria; Getz, Gad; Godzik, Adam

    2018-01-01

    Understanding genetic events that lead to cancer initiation and progression remains one of the biggest challenges in cancer biology. Traditionally most algorithms for cancer driver identification look for genes that have more mutations than expected from the average background mutation rate. However, there is now a wide variety of methods that look for non-random distribution of mutations within proteins as a signal they have a driving role in cancer. Here we classify and review the progress of such sub-gene resolution algorithms, compare their findings on four distinct cancer datasets from The Cancer Genome Atlas and discuss how predictions from these algorithms can be interpreted in the emerging paradigms that challenge the simple dichotomy between driver and passenger genes. PMID:28714987

  14. Lindeberg theorem for Gibbs-Markov dynamics

    NASA Astrophysics Data System (ADS)

    Denker, Manfred; Senti, Samuel; Zhang, Xuan

    2017-12-01

    A dynamical array consists of a family of functions {f_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1} and a family of initial times {τ_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1}. For a dynamical system (X, T) we identify distributional limits for sums of the form S_n = s_n^{-1} Σ_{i=1}^{k_n} (f_{n,i} ∘ T^{τ_{n,i}} − a_{n,i}) for suitable (non-random) constants s_n > 0 and a_{n,i} ∈ ℝ. We derive a Lindeberg-type central limit theorem for dynamical arrays. Applications include new central limit theorems for functions which are not locally Lipschitz continuous and central limit theorems for statistical functions of time series obtained from Gibbs-Markov systems. Our results, which hold for more general dynamics, are stated in the context of Gibbs-Markov dynamical systems for convenience.

  15. Replication of Cancellation Orders Using First-Passage Time Theory in Foreign Currency Market

    NASA Astrophysics Data System (ADS)

    Boilard, Jean-François; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako

    Our research focuses on the annihilation dynamics of limit orders in a spot foreign currency market for various currency pairs. We analyze the cancellation order distribution conditioned on the normalized distance from the mid-price, where the normalized distance is defined as the final distance divided by the initial distance. To reproduce the real data, we introduce two simple models in which the market price moves randomly and cancellation occurs either after a fixed time t or according to a Poisson process. Our models qualitatively reproduce the basic statistical properties of cancellation orders in the data when limit orders are cancelled according to the Poisson process. We briefly discuss the implications of our findings for the construction of more detailed microscopic models.
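
    A minimal sketch of the Poisson-cancellation variant: the mid-price performs a simple random walk, each limit order has an exponentially distributed lifetime, and the normalized distance is recorded at cancellation. All parameter values are illustrative assumptions.

```python
import random

rng = random.Random(0)

def normalized_cancel_distance(d0=10, rate=0.01):
    """Run one order: random-walk mid-price, Poisson cancellation."""
    lifetime = rng.expovariate(rate)    # exponential lifetime (Poisson process)
    d, t = d0, 0.0
    while t < lifetime:
        d += rng.choice((-1, 1))        # mid-price moves one tick per unit time
        t += 1.0
        if d <= 0:
            return None                 # price reached the order: executed
    return d / d0                       # normalized distance at cancellation

samples = [normalized_cancel_distance() for _ in range(20000)]
cancelled = [s for s in samples if s is not None]
print("fraction cancelled:", len(cancelled) / len(samples))
print("mean normalized distance:", sum(cancelled) / len(cancelled))
```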

  16. Computational study on UV curing characteristics in nanoimprint lithography: Stochastic simulation

    NASA Astrophysics Data System (ADS)

    Koyama, Masanori; Shirai, Masamitsu; Kawata, Hiroaki; Hirai, Yoshihiko; Yasuda, Masaaki

    2017-06-01

    A computational simulation model of UV curing in nanoimprint lithography, based on a simplified stochastic approach, is proposed. An activated unit reacts with a randomly selected monomer within a critical reaction radius, and cluster units are chained to each other. Then another monomer is activated and the next chain reaction occurs. This process is repeated until no virgin monomer remains within the reaction radius or until the activated monomers react with each other. The simulation model describes well the basic UV curing characteristics, such as the molecular weight distributions of the reacted monomers and the effect of the initiator concentration on the conversion ratio. The effects of film thickness on the UV curing characteristics are also studied with the simulation.
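
    The chain-reaction rule lends itself to a compact Monte Carlo sketch: scatter monomers at random, activate one, and let the active end repeatedly bond to a randomly chosen virgin monomer within the reaction radius until none remains. Density and radius below are illustrative, not the paper's values.

```python
import math
import random

def grow_chain(n_monomers=1000, radius=0.05, seed=2):
    """One chain: activate a monomer, bond to random virgin neighbours."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n_monomers)]
    virgin = set(range(n_monomers))
    active = virgin.pop()               # initiation step
    chain = [active]
    while True:
        ax, ay = pts[chain[-1]]
        near = [i for i in virgin
                if math.hypot(pts[i][0] - ax, pts[i][1] - ay) < radius]
        if not near:                    # no virgin monomer within reach
            break
        nxt = rng.choice(near)          # randomly selected reaction partner
        virgin.discard(nxt)
        chain.append(nxt)
    return len(chain)

lengths = [grow_chain(seed=s) for s in range(20)]
print("mean chain length:", sum(lengths) / len(lengths))
```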

  17. Quantum dynamics of a particle with a spin-dependent velocity

    NASA Astrophysics Data System (ADS)

    Aslangul, Claude

    2005-01-01

    We study the dynamics of a particle in continuous time and space, the displacement of which is governed by an internal degree of freedom (spin). In one definite limit the so-called quantum random walk is recovered but, although quite simple, the model possesses a rich variety of dynamics and goes far beyond this problem. Generally speaking, our framework can describe the motion of an electron in a magnetic sea near the Fermi level, when linearization of the dispersion law is possible, coupled to a transverse magnetic field. Quite unexpected behaviours are obtained. In particular, we find that when the initial wave packet is fully localized in space, the Jz angular momentum component is frozen; this is an interesting example of an observable which, although not a constant of motion, has a constant expectation value. For a not completely localized wave packet, the effect still occurs although less pronounced, and the spin keeps memory of its initial state forever. Generally speaking, as time goes on, the spatial density profile looks rather complex, as a consequence of the competition between drift and precession, and displays various shapes according to the ratio between the Larmor period and the characteristic time of flight. The density profile gradually changes from a multimodal, quickly moving distribution when the scattering rate is small to a unimodal, standing but flattening distribution in the opposite case.

  18. Airway reopening through catastrophic events in a hierarchical network

    PubMed Central

    Baudoin, Michael; Song, Yu; Manneville, Paul; Baroud, Charles N.

    2013-01-01

    When you reach with your straw for the final drops of a milkshake, the liquid forms a train of plugs that flow slowly initially because of the high viscosity. They then suddenly rupture and are replaced with a rapid airflow with the characteristic slurping sound. Trains of liquid plugs also are observed in complex geometries, such as porous media during petroleum extraction, in microfluidic two-phase flows, or in flows in the pulmonary airway tree under pathological conditions. The dynamics of rupture events in these geometries play the dominant role in the spatial distribution of the flow and in determining how much of the medium remains occluded. Here we show that the flow of a train of plugs in a straight channel is always unstable to breaking through a cascade of ruptures. Collective effects considerably modify the rupture dynamics of plug trains: Interactions among nearest neighbors take place through the wetting films and slow down the cascade, whereas global interactions, through the total resistance to flow of the train, accelerate the dynamics after each plug rupture. In a branching tree of microchannels, similar cascades occur along paths that connect the input to a particular output. This divides the initial tree into several independent subnetworks, which then evolve independently of one another. The spatiotemporal distribution of the cascades is random, owing to strong sensitivity to the plug divisions at the bifurcations. PMID:23277557

  19. Research amongst Physical Therapists in the State of Kuwait: Participation, Perception, Attitude and Barriers

    PubMed Central

    Aljadi, Sameera H.; Alrowayeh, Hesham N.; Alotaibi, Naser M.; Taaqi, Maqdad M.; Alquraini, Habib; Alshatti, Talal A.

    2013-01-01

    Objectives The objectives of this descriptive study were to investigate the attitudes and perceptions of physical therapists in the State of Kuwait regarding research, their intention to engage in research, and the barriers to participating in research. Subjects and Methods A previously validated questionnaire was distributed to 200 non-randomly selected physical therapists. The questionnaire gathered demographic data as well as information regarding research-related activities. Descriptive statistics, frequency and χ2 analyses were used. Results Of the 200 questionnaires distributed, 122 (61%) were completed and returned. The physical therapists had a positive attitude towards reading research findings in order to update their knowledge. However, only 16 (17%) of the physical therapists participated in clinical research. The common reasons given were a minimal role in, and reduced ability, intention and level of engagement in, initiating research, probably due to work overload, time constraints and limited access to resources. Conclusions Physical therapists in Kuwait had a positive attitude towards the application of research findings to their practice. However, they were not confident in initiating research due to work overload, lack of time and limited access to library resources. We therefore recommend making engagement in research activities a requirement and developing a system to improve the skills and knowledge needed to conduct research. PMID:23988758

  20. Unstable vicinal crystal growth from cellular automata

    NASA Astrophysics Data System (ADS)

    Krasteva, A.; Popova, H.; KrzyŻewski, F.; Załuska-Kotur, M.; Tonchev, V.

    2016-03-01

    In order to study unstable step motion on vicinal crystal surfaces we devise vicinal cellular automata. Each cell in the colony has a value equal to its height on the vicinal surface; initially the steps are regularly distributed. Another array keeps the adatoms, initially distributed randomly over the surface. The growth rule specifies that each adatom at the right nearest-neighbour position of a (multi-)step attaches to it. The whole colony is updated at once and then time is incremented. This execution of the growth rule is followed by compensation of the consumed particles and by diffusional update(s) of the adatom population. Two principal sources of instability are employed - biased diffusion and an infinite inverse Ehrlich-Schwoebel barrier (iiSE). Since these factors are not opposed by step-step repulsion, the formation of multi-steps is observed, but in general the step bunches preserve a finite width. We monitor the developing surface patterns and quantify the observations by scaling laws, with focus on the eventual transition from a diffusion-limited to a kinetics-limited phenomenon. The time-scaling exponent of the bunch size N is 1/2 for the case of biased diffusion and 1/3 for the case of the iiSE. Additional distinction is possible based on the time-scaling exponents of the sizes of multi-steps Nmulti: these are 0.36-0.4 (for biased diffusion) and 1/4 (iiSE).
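
    A toy one-dimensional rendition of the automaton may clarify the update cycle (growth rule applied to the whole colony at once, compensation of consumed particles, biased diffusion). The geometry, adatom count and bias probability below are illustrative assumptions, not the authors' parameters.

```python
import random

L, n_adatoms, steps = 200, 20, 2000
rng = random.Random(3)
heights = [(L - i) // 10 for i in range(L)]     # regular step train (descending)
adatoms = set(rng.sample(range(L), n_adatoms))  # random initial adatoms

for _ in range(steps):
    # growth rule, applied to the whole colony at once: an adatom at the
    # right nearest-neighbour site of a step attaches (height grows)
    attached = {i for i in adatoms if i > 0 and heights[i - 1] > heights[i]}
    for i in attached:
        heights[i] += 1
    adatoms -= attached
    # compensation of the consumed particles
    while len(adatoms) < n_adatoms:
        adatoms.add(rng.randrange(L))
    # biased diffusion update; adatoms landing on one site merge (toy choice)
    adatoms = {max(0, min(L - 1, i + (1 if rng.random() < 0.7 else -1)))
               for i in adatoms}

print("final height range:", max(heights) - min(heights))
```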

  1. Comparative analysis of ferroelectric domain statistics via nonlinear diffraction in random nonlinear materials.

    PubMed

    Wang, B; Switowski, K; Cojocaru, C; Roppo, V; Sheng, Y; Scalora, M; Kisielewski, J; Pawlak, D; Vilaseca, R; Akhouayri, H; Krolikowski, W; Trull, J

    2018-01-22

    We present an indirect, non-destructive optical method for characterizing domain statistics in disordered nonlinear crystals having a homogeneous refractive index and a spatially random distribution of ferroelectric domains. The method relies on analysis of the wavelength-dependent spatial distribution of the second harmonic in the plane perpendicular to the optical axis, in combination with numerical simulations. We apply this technique to the characterization of two different media, Calcium Barium Niobate and Strontium Barium Niobate, with drastically different statistical distributions of ferroelectric domains.

  2. Deviations from Rayleigh statistics in ultrasonic speckle.

    PubMed

    Tuthill, T A; Sperry, R H; Parker, K J

    1988-04-01

    The statistics of speckle patterns in ultrasound images have potential for tissue characterization. In "fully developed speckle" from many random scatterers, the amplitude is widely recognized as possessing a Rayleigh distribution. This study examines how scattering populations and signal processing can produce non-Rayleigh distributions. The first order speckle statistics are shown to depend on random scatterer density and the amplitude and spacing of added periodic scatterers. Envelope detection, amplifier compression, and signal bandwidth are also shown to cause distinct changes in the signal distribution.
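
    The Rayleigh baseline is easy to reproduce: summing many unit phasors with random phases gives an envelope whose mean/std ratio is √(π/2)/√(2−π/2) ≈ 1.91, and adding a strong coherent component (mimicking periodic scatterers) pushes the statistics away from Rayleigh. The sketch below is a toy phasor-sum model, not an ultrasound simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def envelope(n_scatterers, coherent=0.0, n_pixels=20000):
    """Envelope of a sum of unit phasors with random phases (+ coherent term)."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_pixels, n_scatterers))
    field = np.exp(1j * phases).sum(axis=1) + coherent
    return np.abs(field)

diffuse = envelope(50)               # many random scatterers -> Rayleigh
mixed = envelope(50, coherent=20.0)  # strong coherent component -> non-Rayleigh
# Rayleigh signature: mean/std ≈ 1.91; the coherent case deviates markedly
print(diffuse.mean() / diffuse.std(), mixed.mean() / mixed.std())
```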

  3. The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements

    DTIC Science & Technology

    2015-06-01

    …of uniform- versus nonuniform-pattern reconstruction, of the transform function used, and of the minimum number of randomly distributed measurements needed… the radiation-frequency pattern's reconstruction using uniform and nonuniform randomly distributed samples, even though the pattern error manifests… [Fig. 3: the nonuniform compressive-sensing reconstruction of the radiation pattern]

  4. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    ERIC Educational Resources Information Center

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  5. Averaging in SU(2) open quantum random walk

    NASA Astrophysics Data System (ADS)

    Ampadu, Clement

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  6. A comparison of methods for estimating the random effects distribution of a linear mixed model.

    PubMed

    Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert

    2010-12-01

    This article reviews various recently suggested approaches to estimate the random effects distribution in a linear mixed model, i.e. (1) the smoothing-by-roughening approach of Shen and Louis,(1) (2) the semi-non-parametric approach of Zhang and Davidian,(2) (3) the heterogeneity model of Verbeke and Lesaffre(3) and (4) the flexible approach of Ghidey et al.(4) These four approaches are compared via an extensive simulation study. We conclude that, for the considered cases, the approach of Ghidey et al.(4) often has the smallest integrated mean squared error for estimating the random effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.

  7. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
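
    The images-versus-points trade-off can be illustrated with a two-stage sampling sketch: true cover varies between images (here an assumed Beta model) and scoring k points within an image is binomial sampling. With the same total number of points, spreading them over more images should give the more precise transect estimate.

```python
import numpy as np

rng = np.random.default_rng(7)

def cover_se(n_images, points_per_image, p_mean=0.1, trials=2000):
    """Std. error of transect cover estimates under two-stage sampling."""
    estimates = []
    for _ in range(trials):
        # between-image variation in true cover (assumed Beta model)
        p = rng.beta(2.0, 2.0 * (1.0 - p_mean) / p_mean, size=n_images)
        scored = rng.binomial(points_per_image, p) / points_per_image
        estimates.append(scored.mean())
    return float(np.std(estimates))

# same total effort (2000 points), allocated two different ways
print("100 images x 20 points:", cover_se(100, 20))
print("20 images x 100 points:", cover_se(20, 100))
```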

  8. Record statistics of financial time series and geometric random walks

    NASA Astrophysics Data System (ADS)

    Sabir, Behlool; Santhanam, M. S.

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of geometric random walk series are in good agreement with those obtained from empirical stock data.
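
    Record ages are straightforward to extract by simulation. The sketch below tracks the running maximum of a random walk in log-price (i.e., a geometric random walk for the price itself) and records how long each record survives; the drift and volatility values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def record_ages(T=100000, mu=0.0, sigma=0.01):
    """Ages of upper records of a random walk in log-price."""
    x = np.cumsum(rng.normal(mu, sigma, T))
    ages, last, running_max = [], None, -np.inf
    for t, v in enumerate(x):
        if v > running_max:               # a new record is set at time t
            running_max = v
            if last is not None:
                ages.append(t - last)     # age of the record just broken
            last = t
    return np.array(ages)

ages = record_ages()
print("records:", ages.size + 1, " longest age:", ages.max())
```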

  9. Data-Enabled Quantification of Aluminum Microstructural Damage Under Tensile Loading

    NASA Astrophysics Data System (ADS)

    Wayne, Steven F.; Qi, G.; Zhang, L.

    2016-08-01

    The study of material failure with digital analytics is in its infancy and offers a new perspective to advance our understanding of damage initiation and evolution in metals. In this article, we study the failure of aluminum using data-enabled methods, statistics and data mining. Through the use of tension tests, we establish a multivariate acoustic-data matrix of random damage events, which typically are not visible and are very difficult to measure due to their variability, diversity and interactivity during damage processes. Aluminum alloy 6061-T651 and single-crystal aluminum with a (111) orientation were evaluated by comparing the acoustic signals collected from damage events, caused primarily by slip in the single crystal and by multimode fracture in the alloy. We found the resulting acoustic damage-event data to be large semi-structured volumes of Big Data with the potential to be mined for information that describes the material's damage state under strain. Our data-enabled analyses have allowed us to determine statistical distributions of multiscale random damage that provide a means to quantify the material damage state.

  10. The Supermarket Model with Bounded Queue Lengths in Equilibrium

    NASA Astrophysics Data System (ADS)

    Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.

    2018-04-01

    In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^{−α} and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^{−α+(k−1)β}, and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
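
    The dynamics are easy to simulate directly with a Gillespie-style event loop: arrivals at total rate λn each join the shortest of d randomly chosen queues, and each busy server completes service at rate 1. The sketch below uses illustrative values α = β = 1/2, for which k = ⌈α/β⌉ = 1.

```python
import math
import random

rng = random.Random(0)
n, alpha, beta = 500, 0.5, 0.5
lam = 1.0 - n ** (-alpha)
d = int(n ** beta)
queues = [0] * n
t, t_end = 0.0, 200.0

while t < t_end:
    busy = sum(1 for q in queues if q > 0)
    total_rate = lam * n + busy
    t += rng.expovariate(total_rate)
    if rng.random() < lam * n / total_rate:          # arrival event
        chosen = rng.sample(range(n), d)             # probe d random servers
        shortest = min(chosen, key=lambda i: queues[i])
        queues[shortest] += 1
    else:                                            # service completion
        server = rng.choice([i for i in range(n) if queues[i] > 0])
        queues[server] -= 1

print("max queue length:", max(queues),
      " predicted k =", math.ceil(alpha / beta))
```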

  11. Non-equilibrium relaxation of vortex lines in disordered type-II superconductors

    NASA Astrophysics Data System (ADS)

    Dobramysl, Ulrich; Assi, Hiba; Pleimling, Michel; Täuber, Uwe C.

    2013-03-01

    Vortex matter in disordered type-II superconductors displays a remarkable wealth of behavior, ranging from hexagonally arranged crystals and a vortex liquid to glassy phases. The type and strength of the disorder have a profound influence on the structural properties of the vortex matter: randomly distributed weak point pinning sites lead to the destruction of long-range order and a Bragg glass phase; correlated, columnar disorder can yield a Bose glass phase with infinite tilt modulus. We employ a three-dimensional elastic line model and apply a Langevin molecular dynamics algorithm to simulate the dynamics of vortex lines in a dissipative medium. We investigate the relaxation of a system of lines initially prepared in an out-of-equilibrium state and characterize the transient behavior via two-time quantities. We vary the disorder type and strength and compare our results for random and columnar disorder. Research supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-FG02-09ER46613.

  12. Robustness and Vulnerability of Networks with Dynamical Dependency Groups.

    PubMed

    Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi

    2016-11-28

    The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations focused mainly on the failure mechanism of static dependency groups without considering the time-dependency of interdependent nodes and the recovery mechanism in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and Barabási-Albert scale free network are performed to validate our theoretical results.

  13. Multifractal surrogate-data generation algorithm that preserves pointwise Hölder regularity structure, with initial applications to turbulence

    NASA Astrophysics Data System (ADS)

    Keylock, C. J.

    2017-03-01

    An algorithm is described that can generate random variants of a time series while preserving the probability distribution of original values and the pointwise Hölder regularity. Thus, it preserves the multifractal properties of the data. Our algorithm is similar in principle to well-known algorithms based on the preservation of the Fourier amplitude spectrum and original values of a time series. However, it is underpinned by a dual-tree complex wavelet transform rather than a Fourier transform. Our method, which we term the iterated amplitude adjusted wavelet transform can be used to generate bootstrapped versions of multifractal data, and because it preserves the pointwise Hölder regularity but not the local Hölder regularity, it can be used to test hypotheses concerning the presence of oscillating singularities in a time series, an important feature of turbulence and econophysics data. Because the locations of the data values are randomized with respect to the multifractal structure, hypotheses about their mutual coupling can be tested, which is important for the velocity-intermittency structure of turbulence and self-regulating processes.
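
    For orientation, here is a sketch of the classic iterated amplitude-adjusted Fourier transform (IAAFT) that the wavelet-based method generalizes: it alternates between imposing the original Fourier amplitudes and restoring the original value distribution by rank-ordering. This is the well-known Fourier relative, not the authors' dual-tree complex wavelet algorithm.

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Surrogate preserving the amplitude spectrum and value distribution."""
    rng = np.random.default_rng(seed)
    amplitudes = np.abs(np.fft.rfft(x))
    sorted_vals = np.sort(x)
    y = rng.permutation(x)                      # random starting shuffle
    for _ in range(n_iter):
        # step 1: impose the target Fourier amplitudes, keep current phases
        phases = np.angle(np.fft.rfft(y))
        y = np.fft.irfft(amplitudes * np.exp(1j * phases), n=len(x))
        # step 2: restore the original value distribution by rank-ordering
        y = sorted_vals[np.argsort(np.argsort(y))]
    return y

x = np.sin(np.linspace(0.0, 20.0 * np.pi, 1024))
x += 0.3 * np.random.default_rng(1).normal(size=1024)
s = iaaft(x)
print("values preserved:", bool(np.allclose(np.sort(s), np.sort(x))))
```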

  14. Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems

    NASA Astrophysics Data System (ADS)

    Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki

    2014-04-01

    We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.

  15. Spatial Analysis of “Crazy Quilts”, a Class of Potentially Random Aesthetic Artefacts

    PubMed Central

    Westphal-Fitch, Gesche; Fitch, W. Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. “Crazy quilts” represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures. PMID:24066095

  16. Spatial analysis of "crazy quilts", a class of potentially random aesthetic artefacts.

    PubMed

    Westphal-Fitch, Gesche; Fitch, W Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. "Crazy quilts" represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures.

  17. A random matrix approach to credit risk.

    PubMed

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  18. A Random Matrix Approach to Credit Risk

    PubMed Central

    Münnix, Michael C.; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864
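
    The ensemble idea can be sketched by drawing random correlation matrices from a Wishart-type construction, generating correlated returns, and recording the fraction of obligors whose return falls below a default threshold. The portfolio size, factor count and threshold below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, trials = 50, 100, 2000       # obligors, noise dimension, ensemble size
losses = []
for _ in range(trials):
    W = rng.normal(size=(K, N))
    C = W @ W.T / N                              # random Wishart-type covariance
    dstd = np.sqrt(np.diag(C))
    C = C / np.outer(dstd, dstd)                 # normalize to correlations
    returns = rng.multivariate_normal(np.zeros(K), C)
    losses.append(np.mean(returns < -2.0))       # fraction defaulting
losses = np.array(losses)
print("mean loss:", losses.mean(),
      " 99th percentile:", np.quantile(losses, 0.99))
```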

  19. The influence of statistical properties of Fourier coefficients on random Gaussian surfaces.

    PubMed

    de Castro, C P; Luković, M; Andrade, R F S; Herrmann, H J

    2017-05-16

    Many examples of natural systems can be described by random Gaussian surfaces. Much can be learned by analyzing the Fourier expansion of the surfaces, from which it is possible to determine the corresponding Hurst exponent and consequently establish the presence of scale invariance. We show that this symmetry is not affected by the distribution of the modulus of the Fourier coefficients. Furthermore, we investigate the role of the Fourier phases of random surfaces. In particular, we show how the surface is affected by a non-uniform distribution of phases.
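
    A random Gaussian surface with a prescribed Hurst exponent H can be synthesized by Fourier filtering: draw uniform phases and impose a power spectrum ∝ |k|^−(2H+2). The sketch below generates the uniform-phase baseline case; the paper's question is what happens when the phase or modulus distributions are altered.

```python
import numpy as np

def gaussian_surface(n=256, H=0.7, seed=0):
    """Self-affine Gaussian surface via Fourier filtering (uniform phases)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                               # avoid division by zero
    modulus = k ** (-(H + 1.0))                 # sqrt of spectrum |k|^-(2H+2)
    phases = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    spectrum = modulus * np.exp(1j * phases)
    spectrum[0, 0] = 0.0                        # zero-mean surface
    # taking the real part implicitly Hermitian-symmetrizes the spectrum
    return np.fft.ifft2(spectrum).real

z = gaussian_surface()
print("rms height:", z.std())
```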

  20. Hundred-watt-level high power random distributed feedback Raman fiber laser at 1150 nm and its application in mid-infrared laser generation.

    PubMed

    Zhang, Hanwei; Zhou, Pu; Wang, Xiong; Du, Xueyuan; Xiao, Hu; Xu, Xiaojun

    2015-06-29

    Two kinds of hundred-watt-level random distributed feedback Raman fiber lasers have been demonstrated. The optical efficiency reaches as high as 84.8%; to our knowledge, these are the highest power and efficiency reported for such random lasers. We have also demonstrated that the developed random laser can be used to pump a Ho-doped fiber laser for mid-infrared laser generation; finally, a 23 W laser output at 2050 nm is achieved. The presented laser delivers high power output efficiently and conveniently, and opens a new direction for high power laser sources at designed wavelengths.

  1. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  2. Fast and secure encryption-decryption method based on chaotic dynamics

    DOEpatents

    Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.

    1995-01-01

    A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo random integer sequence. A system for accomplishing the invention is also provided.
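
    A minimal sketch of this style of cipher, with assumed details: m independent logistic maps are iterated, their states combined into an "initial" value, transformed to a pseudo-random byte, and the length-L keystream XORed with the message. The map choice and combining rule are illustrative, not those specified in the patent (and a toy like this is not cryptographically secure).

```python
def keystream(length, seeds=(0.41, 0.57, 0.83), r=3.99):
    """Length-`length` byte keystream from m independent logistic maps."""
    states = list(seeds)                              # m chaotic maps
    out = []
    for _ in range(length):
        states = [r * x * (1.0 - x) for x in states]  # m chaotic iterates
        combined = sum(states) % 1.0                  # "initial" value
        out.append(int(combined * 256) & 0xFF)        # pseudo-random integer
    return out

def crypt(message: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    return bytes(c ^ k for c, k in zip(message, keystream(len(message))))

cipher = crypt(b"attack at dawn")
print(crypt(cipher))   # b'attack at dawn'
```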

  3. Large deflection random response of cross-ply laminated plates with elastically restrained edges and initial imperfections

    NASA Technical Reports Server (NTRS)

    Prasad, C. B.; Mei, Chuh

    1988-01-01

    The large deflection random response of symmetrically laminated cross-ply rectangular thin plates subjected to random excitation is studied. The out-of-plane boundary conditions are such that all the edges are rigidly supported against translation, but elastically restrained against rotation. The plate is also assumed to have a small initial imperfection. The assumed membrane boundary conditions are such that all the edges are free from normal and tangential forces in the plane of the plate. Mean-square deflections and mean-square strains are determined for a three-layered cross-ply laminate.

  4. Initiation, adherence, and retention in a randomized controlled trial of directly administered antiretroviral therapy.

    PubMed

    Maru, Duncan Smith-Rohrberg; Bruce, R Douglas; Walton, Mary; Mezger, Jo Anne; Springer, Sandra A; Shield, David; Altice, Frederick L

    2008-03-01

    Directly administered antiretroviral therapy (DAART) can improve health outcomes among HIV-infected drug users. An understanding of the utilization of DAART-initiation, adherence, and retention-is critical to successful program design. Here, we use the Behavioral Model to assess the enabling, predisposing, and need factors impacting adherence in our randomized, controlled trial of DAART versus self-administered therapy (SAT) among 141 HIV-infected drug users. Of 88 participants randomized to DAART, 74 (84%) initiated treatment, and 51 (69%) of those who initiated were retained in the program throughout the entire six-month period. Mean adherence to directly observed visits was 73%, and the mean overall composite adherence score was 77%. These results were seen despite the finding that 75% of participants indicated that they would prefer to take their own medications. Major causes of DAART discontinuation included hospitalization, incarceration, and entry into drug-treatment programs. The presence of depression and the lack of willingness to travel greater than four blocks to receive DAART predicted time-to-discontinuation.

  5. Initiation, Adherence, and Retention in a Randomized Controlled Trial of Directly Administered Antiretroviral Therapy

    PubMed Central

    Maru, Duncan Smith-Rohrberg; Bruce, R. Douglas; Walton, Mary; Mezger, Jo Anne; Springer, Sandra A.; Shield, David

    2009-01-01

    Directly administered antiretroviral therapy (DAART) can improve health outcomes among HIV-infected drug users. An understanding of the utilization of DAART—initiation, adherence, and retention—is critical to successful program design. Here, we use the Behavioral Model to assess the enabling, predisposing, and need factors impacting adherence in our randomized, controlled trial of DAART versus self-administered therapy (SAT) among 141 HIV-infected drug users. Of 88 participants randomized to DAART, 74 (84%) initiated treatment, and 51 (69%) of those who initiated were retained in the program throughout the entire six-month period. Mean adherence to directly observed visits was 73%, and the mean overall composite adherence score was 77%. These results were seen despite the finding that 75% of participants indicated that they would prefer to take their own medications. Major causes of DAART discontinuation included hospitalization, incarceration, and entry into drug-treatment programs. The presence of depression and the lack of willingness to travel greater than four blocks to receive DAART predicted time-to-discontinuation. PMID:18085432

  6. Declines in moose population density at Isle Royle National Park, MI, USA and accompanied changes in landscape patterns

    USGS Publications Warehouse

    De Jager, N. R.; Pastor, J.

    2009-01-01

    Ungulate herbivores create patterns of forage availability, plant species composition, and soil fertility as they range across large landscapes and consume large quantities of plant material. Over time, herbivore populations fluctuate, producing great potential for spatio-temporal landscape dynamics. In this study, we extend the spatial and temporal extent of a long-term investigation of the relationship of landscape patterns to moose foraging behavior at Isle Royale National Park, MI. We examined how patterns of browse availability and consumption, plant basal area, and soil fertility changed during a recent decline in the moose population. We used geostatistics to examine changes in the nature of spatial patterns in two valleys over 18 years and across short-range and long-range distance scales. Landscape patterns of available and consumed browse changed from either repeated patches or randomly distributed patches in 1988-1992 to random point distributions by 2007 after a recent record high peak followed by a rapid decline in the moose population. Patterns of available and consumed browse became decoupled during the moose population low, which is in contrast to coupled patterns during the earlier high moose population. Distributions of plant basal area and soil nitrogen availability also switched from repeated patches to randomly distributed patches in one valley and to random point distributions in the other valley. Rapid declines in moose population density may release vegetation and soil fertility from browsing pressure and in turn create random landscape patterns. ?? Springer Science+Business Media B.V. 2009.

  7. A comparison of numerical solutions of partial differential equations with probabilistic and possibilistic parameters for the quantification of uncertainty in subsurface solute transport.

    PubMed

    Zhang, Kejiang; Achari, Gopal; Li, Hua

    2009-11-03

    Traditionally, uncertain parameters are represented as probabilistic distributions and incorporated into groundwater flow and contaminant transport models. With the advent of newer uncertainty theories, it is now understood that stochastic methods cannot properly represent non-random uncertainties. In the groundwater flow and contaminant transport equations, uncertainty in some parameters may be random, whereas that of others may be non-random. The objective of this paper is to develop a fuzzy-stochastic partial differential equation (FSPDE) model to simulate conditions where both random and non-random uncertainties are involved in groundwater flow and solute transport. Three potential solution techniques are proposed and compared: (a) transforming a probability distribution to a possibility distribution (Method I), whereby the FSPDE becomes a fuzzy partial differential equation (FPDE); (b) transforming a possibility distribution to a probability distribution (Method II), whereby the FSPDE becomes a stochastic partial differential equation (SPDE); and (c) combining Monte Carlo methods with FPDE solution techniques (Method III). The effects of these three methods on the predictive results are investigated using two case studies. The results show that the predictions obtained from Method II are a specific case of those obtained from Method I. When an exact probabilistic result is needed, Method II is suggested. As the loss or gain of information during a probability-possibility (or vice versa) transformation cannot be quantified, its influence on the predictive results is not known. Thus, Method III should probably be preferred for risk assessments.

  8. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions, so LHS UNIX Library/Standalone provides a way to generate multi-variate samples. The LHS samples can be generated either from a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into n non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
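
    The core LHS construction fits in a few lines: split [0, 1] into n equal-probability strata per variable, draw one point in each stratum, then pair values across variables by independent random permutations; an inverse CDF maps the result onto any target distribution. This is a generic sketch of the method, not the LHS UNIX Library API.

```python
import numpy as np
from scipy.stats import norm

def lhs(n_samples, n_vars, seed=0):
    """Latin Hypercube sample of shape (n_samples, n_vars) on [0, 1)."""
    rng = np.random.default_rng(seed)
    u = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        # one draw per equal-probability stratum, then shuffle the pairing
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        u[:, j] = rng.permutation(strata)
    return u

samples = lhs(10, 2)
# inverse-CDF transform maps a column onto any target distribution
print(np.sort(norm.ppf(samples[:, 0])))
```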

  9. Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation

    PubMed Central

    Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan

    2013-01-01

    The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
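
    For context, randomized distributed averaging can be illustrated with the classic push-sum gossip protocol (Kempe et al.), in which every node converges to the global average using only random pairwise messages; this is a well-known relative of, not the same as, the paper's push-flow algorithm.

```python
import random

def push_sum(values, rounds=50, seed=0):
    """Gossip averaging: every node converges to the global mean."""
    rng = random.Random(seed)
    n = len(values)
    s = list(values)                       # running sums
    w = [1.0] * n                          # running weights
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)           # random communication partner
            half_s, half_w = s[i] / 2.0, w[i] / 2.0
            s[i], w[i] = half_s, half_w    # keep half locally...
            s[j] += half_s                 # ...push the other half
            w[j] += half_w
    return [si / wi for si, wi in zip(s, w)]

print(push_sum([1.0, 2.0, 3.0, 10.0]))     # all estimates near 4.0
```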

  10. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
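
    One concrete member of this family is the biased-coin up-and-down rule: after a toxicity the dose steps down, and after a non-toxicity it steps up with probability b = Γ/(1−Γ) for a target quantile Γ < 0.5, which centers assignments near the dose with toxicity probability Γ. The dose-toxicity curve below is an illustrative assumption.

```python
import random

def simulate(tox_probs=(0.05, 0.15, 0.30, 0.50, 0.70), target=0.25,
             n_patients=2000, seed=0):
    """Biased-coin up-and-down allocation; returns visits per dose level."""
    rng = random.Random(seed)
    b = target / (1.0 - target)            # up-step probability after no toxicity
    level, visits = 0, [0] * len(tox_probs)
    for _ in range(n_patients):
        visits[level] += 1
        if rng.random() < tox_probs[level]:          # toxicity: step down
            level = max(level - 1, 0)
        elif rng.random() < b:                       # no toxicity: maybe step up
            level = min(level + 1, len(tox_probs) - 1)
    return visits

print(simulate())   # assignments pile up near the dose with ~25% toxicity
```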

  11. Quantum tunneling recombination in a system of randomly distributed trapped electrons and positive ions.

    PubMed

    Pagonis, Vasilis; Kulp, Christopher; Chaney, Charity-Grace; Tachiya, M

    2017-09-13

    During the past 10 years, quantum tunneling has been established as one of the dominant mechanisms for recombination in random distributions of electrons and positive ions, and in many dosimetric materials. Specifically quantum tunneling has been shown to be closely associated with two important effects in luminescence materials, namely long term afterglow luminescence and anomalous fading. Two of the common assumptions of quantum tunneling models based on random distributions of electrons and positive ions are: (a) An electron tunnels from a donor to the nearest acceptor, and (b) the concentration of electrons is much lower than that of positive ions at all times during the tunneling process. This paper presents theoretical studies for arbitrary relative concentrations of electrons and positive ions in the solid. Two new differential equations are derived which describe the loss of charge in the solid by tunneling, and they are solved analytically. The analytical solution compares well with the results of Monte Carlo simulations carried out in a random distribution of electrons and positive ions. Possible experimental implications of the model are discussed for tunneling phenomena in long term afterglow signals, and also for anomalous fading studies in feldspars and apatite samples.
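
    Assumption (a) is easy to explore by Monte Carlo: scatter equal numbers of electrons and ions uniformly at random and let each electron tunnel to its nearest ion at rate s·exp(−r/a). The sketch below ignores competition between electrons for the same ion, an extra simplification beyond the model above; parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n0, a, s = 500, 0.005, 1.0e7           # pairs, tunneling length, frequency factor
electrons = rng.random((n0, 3))        # random positions in the unit cube
ions = rng.random((n0, 3))
# distance from each electron to its nearest positive ion
r = np.min(np.linalg.norm(electrons[:, None, :] - ions[None, :, :], axis=2),
           axis=1)
# exponential lifetimes with tunneling rate s * exp(-r / a)
lifetimes = rng.exponential(1.0 / (s * np.exp(-r / a)))
for t in (1e-4, 1e-2, 1.0, 1e2, 1e4):
    print(f"t = {t:g}: surviving fraction = {(lifetimes > t).mean():.3f}")
```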

  12. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper the statistical generalization of the Fermi paradox is provided. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable Planets for Man (1964). The statistical generalization of the original, by now too simplistic, Dole equation is obtained by replacing a product of ten positive numbers with a product of ten positive random variables. This is denoted the SEH, an acronym standing for "Statistical Equation for Habitables". The proof in this paper is based on the Central Limit Theorem (CLT) of statistics, stating that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (both of which do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH, the (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964), and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance between any two nearby habitable planets should be about 88 light years ± 40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi paradox is derived by applying the above results to the coral expansion model of Galactic colonization, using the symbolic manipulator Macsyma to solve the resulting difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced; it follows the lognormal distribution. The quotient random variable Tcol/D is then studied and its probability density function is derived with Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this statistical Fermi paradox is highly innovative and fruitful for future research.
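
    The CLT argument for lognormality is easy to check numerically: the logarithm of a product of independent positive factors is a sum to which the CLT applies. The sketch below uses ten uniform factors with ±10% spread around illustrative (not Dole's) mean values.

```python
import numpy as np

rng = np.random.default_rng(0)
# ten illustrative mean values for the astrobiological factors (not Dole's)
means = np.array([10.0, 0.5, 2.0, 1.5, 0.1, 0.9, 3.0, 0.2, 5.0, 0.7])
factors = rng.uniform(means * 0.9, means * 1.1, size=(100000, 10))
n_hab = factors.prod(axis=1)           # the product random variable
log_n = np.log(n_hab)                  # CLT applies to this sum of logs
skew = np.mean((log_n - log_n.mean()) ** 3) / log_n.std() ** 3
print("skewness of log(N):", skew)     # near 0: log(N) is close to Gaussian
```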

  13. Convex hulls of random walks in higher dimensions: A large-deviation study

    NASA Astrophysics Data System (ADS)

    Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.

    2017-12-01

    The distributions of the hypervolume V and surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, resolving probabilities far smaller than P = 10^{−1000} so as to estimate large-deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, as well as behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
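
    Plain sampling can reproduce the bulk of these distributions (though not, of course, probabilities anywhere near 10^{−1000}, which require the large-deviation techniques of the paper). A minimal sketch for d = 3 using Gaussian steps:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
T, d, n_samples = 200, 3, 2000        # walk length, dimension, sample count
volumes = np.empty(n_samples)
for i in range(n_samples):
    walk = np.cumsum(rng.normal(size=(T, d)), axis=0)   # one random walk
    volumes[i] = ConvexHull(walk).volume
print("mean hull volume:", volumes.mean(), " std:", volumes.std())
```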

  14. A random wave model for the Aharonov-Bohm effect

    NASA Astrophysics Data System (ADS)

    Houston, Alexander J. H.; Gradhand, Martin; Dennis, Mark R.

    2017-05-01

    We study an ensemble of random waves subject to the Aharonov-Bohm effect. The introduction of a point with a magnetic flux of arbitrary strength into a random wave ensemble gives a family of wavefunctions whose distribution of vortices (complex zeros) is responsible for the topological phase associated with the Aharonov-Bohm effect. Analytical expressions are found for the vortex number and topological charge densities as functions of distance from the flux point. Comparison is made with the distribution of vortices in the isotropic random wave model. The results indicate that as the flux approaches half-integer values, a vortex with the same sign as the fractional part of the flux is attracted to the flux point, merging with it in the limit of half-integer flux. We construct a statistical model of the neighbourhood of the flux point to study how this vortex-flux merger occurs in more detail. Other features of the Aharonov-Bohm vortex distribution are also explored.

  15. Analysis of the expected density of internal equilibria in random evolutionary multi-player multi-strategy games.

    PubMed

    Duong, Manh Hong; Han, The Anh

    2016-12-01

    In this paper, we study the distribution and behaviour of internal equilibria in a d-player n-strategy random evolutionary game where the game payoff matrix is generated from normal distributions. The study of this paper reveals and exploits interesting connections between evolutionary game theory and random polynomial theory. The main contributions of the paper are some qualitative and quantitative results on the expected density, [Formula: see text], and the expected number, E(n, d), of (stable) internal equilibria. Firstly, we show that in multi-player two-strategy games, they behave asymptotically as [Formula: see text] as d is sufficiently large. Secondly, we prove that they are monotone functions of d. We also make a conjecture for games with more than two strategies. Thirdly, we provide numerical simulations for our analytical results and to support the conjecture. As consequences of our analysis, some qualitative and quantitative results on the distribution of zeros of a random Bernstein polynomial are also obtained.

  16. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a well-known quantum computation model. We find that the quantum Hash function can act as the hash function in the privacy amplification process of quantum key distribution systems, with improved security. As a byproduct, it can also be used for pseudo-random number generation owing to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption, as measured by various hash and randomness tests. It extends the scope of application of quantum computation and quantum information.
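
    The construction is not spelled out in the abstract; as a toy illustration of the general idea, one can let each message bit choose between two coin operators in a discrete-time coined quantum walk on a cycle and digest the final position distribution into hash bytes. The sketch below is such a toy, with arbitrary coin angles and an ad hoc digest scheme; it is not the authors' construction and makes no security claims.

      import numpy as np

      def qw_hash(message: bytes, n_nodes: int = 64, out_bytes: int = 16) -> bytes:
          """Toy quantum-walk hash (illustrative only, not the paper's scheme)."""
          def coin(theta):
              c, s = np.cos(theta), np.sin(theta)
              return np.array([[c, s], [s, -c]])   # unitary reflection coin
          C0, C1 = coin(np.pi / 5), coin(np.pi / 3)  # arbitrary coin angles

          # state[p, c] = amplitude at position p on the cycle, coin state c
          state = np.zeros((n_nodes, 2), dtype=complex)
          state[0, 0] = 1.0

          bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
          for b in bits:
              state = state @ (C1 if b else C0).T          # message-controlled coin
              state = np.stack([np.roll(state[:, 0], 1),   # coin 0 moves right
                                np.roll(state[:, 1], -1)], # coin 1 moves left
                               axis=1)

          prob = (np.abs(state) ** 2).sum(axis=1)          # position distribution
          q = (np.floor(prob * 1e6) % 256).astype(np.uint8)  # quantize
          digest = np.zeros(out_bytes, dtype=np.uint8)
          for i, v in enumerate(q):                        # fold to digest length
              digest[i % out_bytes] ^= v
          return digest.tobytes()

      print(qw_hash(b"hello").hex())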

  17. Investigating the Influence of the Initial Biomass Distribution and Injection Strategies on Biofilm-Mediated Calcite Precipitation in Porous Media

    DOE PAGES

    Hommel, Johannes; Lauchnor, Ellen; Gerlach, Robin; ...

    2015-12-16

    Attachment of bacteria in porous media involves a complex mixture of processes resulting in the transfer and immobilization of suspended cells onto a solid surface within the porous medium. However, quantifying the rate of attachment is difficult due to the many processes possibly involved simultaneously, including straining, sorption, and sedimentation, and the difficulties in measuring metabolically active cells attached to porous media. Preliminary experiments confirmed the difficulty associated with measuring active Sporosarcina pasteurii cells attached to porous media. However, attachment is a key process in applications of biofilm-mediated reactions in the subsurface, such as microbially induced calcite precipitation. Independent of the exact processes involved, attachment determines both the distribution and the initial amount of attached biomass, and as such the initial reaction rate. As direct experimental investigations are difficult, this study is limited to a numerical investigation of the effect of various initial biomass distributions and initial amounts of attached biomass. This is performed for various injection strategies, changing the injection rate as well as alternating between continuous and pulsed injections. The results of this study indicate that, for the selected scenarios, both the initial amount and the distribution of attached biomass have minor influence on the Ca2+ precipitation efficiency as well as on the distribution of the precipitates, compared to the influence of the injection strategy. The influence of the initial biomass distribution on the resulting final distribution of the precipitated calcite is limited, except for the continuous injection at the intermediate injection rate. But even for this injection strategy, the Ca2+ precipitation efficiency shows no significant dependence on the initial biomass distribution.
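
    The study itself rests on full Darcy-scale reactive-transport simulations; the heavily simplified sketch below only illustrates the kind of comparison made: a 1-D advective toy model in which injected Ca2+ precipitates at a rate proportional to the local attached biomass, run for continuous versus pulsed injection and for uniform versus random initial biomass profiles. All parameters and the model itself are invented for illustration.

      import numpy as np

      def efficiency(biomass, pulsed, n_steps=2000, dt=0.5, v=0.01, k=0.05):
          """Toy 1-D upwind advection of Ca2+ with first-order,
          biomass-catalysed precipitation; returns precipitated/injected."""
          ca = np.zeros(biomass.size)
          injected = precipitated = 0.0
          for t in range(n_steps):
              inflow = 1.0 if (not pulsed or (t // 100) % 2 == 0) else 0.0
              injected += inflow * v * dt
              ca[1:] -= v * dt * (ca[1:] - ca[:-1])   # upwind advection, dx = 1
              ca[0] += v * dt * (inflow - ca[0])      # inlet boundary
              dp = k * biomass * ca * dt              # precipitation this step
              precipitated += dp.sum()
              ca -= dp
          return precipitated / injected

      rng = np.random.default_rng(3)
      profiles = {"uniform biomass": np.full(100, 0.5),
                  "random biomass": rng.uniform(0.0, 1.0, 100)}

      for name, b in profiles.items():
          for pulsed in (False, True):
              print(f"{name:16s} {'pulsed' if pulsed else 'continuous':10s} "
                    f"efficiency = {efficiency(b, pulsed):.3f}")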
